no one's getting a million dollars and an invitation to the beisutsukai
honored.
I like object-level posts that also aren't about AI. They're a minority on LW now, so they feel like high signal in a sea of noise. (That doesn't mean they actually carry more signal, just that the rarity makes it seem that way to me.)
It felt odd to read that and think "this isn't directed toward me, I could skip it if I wanted to". Like, I don't know how to articulate the feeling, but it's an odd "woah, text-not-for-humans is going to become more common, isn't it". It just feels strange to be left behind.
Thank you for this. I feel like a general policy of "please at least disclose" would make me feel significantly less insane when reading certain posts.
Have you tried iterating on this? Like, the "I don't care about the word 'prodrome'" sounds like the kind of thing you could include in your prompt and reiterate until everything you don't like about the LLM's responses is solved or you run out of ideas.
Also, FYI: ChatGPT Deep Research uses the "o3" model, not 4o, even if it says 4o at the top left (you can try running Deep Research with any of the models selected in the top left, and it will output the same kind of thing).
o3 was RLed (!) into being particularly good at web search (and tangential skills ...
My highlight link didn't work, but this is the particular passage in the second example that drove me crazy:
The punchline works precisely because we recognize that slightly sheepish feeling of being reflexively nice to inanimate objects. It transforms our "irrational" politeness into accidental foresight.
The joke hints at an important truth, even if it gets the mechanism wrong: our conversations with current artificial intelligences may not be as consequence-free as they seem.
That's fair, I think I was being overconfident and frustrated, such that these don't express my real preferences.
But I did make it clear these were preferences unrelated to my call, which was "you should warn people" not "you should avoid direct LLM output entirely". I wouldn't want such a policy, and wouldn't know how to enforce it anyway.
I think I'm allowed to have an unreasonable opinion like "I will read no LLM output I don't prompt myself, please stop shoving it into my face" and not get called out on epistemic grounds, except in the context of "wait...
If it doesn't clutter the UI too much, I think an explicit message near the submit button saying "please disclose if part of your post is copy-pasted from an LLM" would go a long way!
If this is the way the LW garden-keepers feel about LLM output, then why not make that stance more explicit? Can't find a policy for this in the FAQ either!
I think some users here believe LLM output can be high-value reading and don't think a warning is necessary; they're acting in good faith, though, and would insert a warning if prompted to.
Touching. Thank you for this.
When I was 11 I cut off some of my very-much-alive cat's fur to ensure future cloning would be possible, and put it in a little plastic bag I hid from my parents. He died when I was 15, and the bag is still somewhere in my Trunk of Everything.
I don't imagine there's much genetic content left, but I also have a vague intuition that we severely underestimate how much information a superintelligence could extract from reality, so I'll hold onto a lingering hope.
My past self would have wanted me to keep tabs on how the te...
Can we have an official LessWrong stance on LLM writing?
The last two posts I read contained what I'm ~95% sure is LLM writing, and both times I felt betrayed, annoyed, and tempted to skip ahead.
I would feel saner if there were a "this post was partially AI-written" tag authors could add as a warning. I think an informal standard of courteously warning people could work too, but that requires slow coordination-by-osmosis.
Unrelated to my call, and as a personal opinion: I don't think you're adding any value for me if you include even a single p...
My calendar reminder didn't go off, are submissions closed-closed?
Oh yeah, no problem with writing with LLMs, only with doing it without disclosing it. Though I guess that wasn't the case here; sorry for flagging this.
I'm not sure I want to change my approach next time, though, because I do feel like I should be on my toes. Beware of drifting too much toward the LLM's stylebook, I guess.
Maybe I'm going crazy, but the frequent use of qualifiers for almost every noun in your writing screams "LLM" to me. Did you use LLM assistance? I don't get that same feel from your comments, so I'm leaning toward an AI having written only the Shortform itself.
If you did use AI, I'd be in favor of you disclosing that so that people like me don't feel like they're gradually going insane.
If not, then I'm sorry and I retract this. (Though I'm not sure what to tell you: I think this writing style feels too formal and filled with fluff like "crucial" or "invaluable", and I bet you'll increasingly be taken for an AI in other contexts.)
Sent it in!
The original post, the actual bet, and the short scuffle in the comments are exactly the kind of epistemic virtue, basic respect, and straight-talking object-level discussion that I like about LessWrong.
I'm surprised and saddened that there aren't more posts like this one around (prediction markets are one thing; loud, public bets on carefully written LW posts are another).
Having something like this occur every month or so seems important for keeping the garden on its toes and reminding everyone that beliefs must pay rent, possibly in the form of PayPal cash transfers.
I wrote this after watching Oppenheimer and noticing with horror that I wanted to emulate the protagonist in ways entirely unrelated to his merits. Not just unrelated but antithetical: cargo-culting the flaws of competent/great/interesting people is actively harmful to my goals! Why would I do this!? The pattern generalized, so I wrote a rant against myself, then figured it'd be good for LessWrong, and posted it here with minimal edits.
I think the post is crude and messily written, but does the job.
Meta comment: I notice I'm surprised that out ...
I think you're right, but I rarely hear this take. Probably because "good at both coding and LLMs" is a thin tail of the distribution, and most of the relative value of LLMs for coding sits at the other, much heavier end: "not good at coding" or even "good at neither coding nor LLMs".
(Speaking as someone who didn't even code until LLMs made it trivially easy, I probably got more relative value than even you.)
Need any help on post drafts? Whatever we can do to reduce those trivial inconveniences.
I'm very much in favor of this kind of post. Whatever this is, I think it's important for ensuring LW doesn't get "frozen" in a state where specific objects are given higher respect than systems. Strong-upvoted.
I think you could get a lot out of adding a temporary golden dollar sign, with the amount donated, next to our LW names! Upon proof of a donation receipt or whatever.
Seems like the lowest-hanging fruit for monetizing vanity; benches are usually somewhat of a last resort!
(The benches still seem underpriced to me, given the expected amount raised and average donation size in the foreseeable future.)
I've been at Sciences Po for a few months now. Do you have any general advice? I seem to have trouble taking the subjects seriously enough to put any real effort into them, which you seem to point out as a failure mode you skirted. I'm asking as many people as I can about this, as I'm going through a minor existential crisis. Thanks!
Yeah that'd go into some "aesthetic flaws" category which presumably has no risk of messing with your rationality. I agree these exist. And I too am picky.
I agree about the punchline. Chef's-kiss post.
Good list!
I personally really like Scott Alexander's Presidential Platform; it hits the hilarious-but-also-almost-works spot so perfectly. He also has many Bay Area house party stories in addition to the one you link (you can find a bunch (all?) of them linked at the top of this post). And he has this one from a long time ago, which has one of the best punchlines I've read.
Can I piggy-back off your conclusions so far? Any news you find okay?
Well then, I can update a little more in the direction of not trusting this stuff.
Ah right, the decades part: I had written about the 1830 revolution, the Commune, and the deposition of the Bourbons, then checked the dates online and stupidly thought "ah, it must be just 1815 then" and only talked about that. Thanks!
"second" laughcries in french
Ahem, as one of LW's few resident Frenchmen, I must interject to say that yes, this was not the Big Famous Guillotine French revolution everyone talks about, but one of the ~2,456^2 other revolutions that went on in our otherwise very calm history.
Specifically, we refer to the Les Mis revolution as "Les barricades", mostly because the people of Paris put up barricades everywhere and fought the authorities, since they didn't like the king that the other powers of Europe had put into place after Napoleon's defeat. They failed that time, but succeeded 15 years...
Do we know what side we're on? Because I opted in and don't know whether I'm East or West; it just feels Wrong. I guess I stand a non-trivial chance of losing 50 karma. Ahem. Please think of the daisy girl, and also of my precious internet points.
Anti-moderative action will be taken in response if you stand in the way of justice, perhaps by contacting those hackers and giving them creative ideas. Be forewarned.
Fun fact: it's thanks to Lucie that I ended up stumbling onto PauseAI in the first place. Small world + thanks Lucie.
Update, everyone: the hard right did not end up gaining a parliamentary majority, which, as Lucie mentioned, would have been the worst outcome wrt AI safety.
Looking ahead, it seems that France will end up fairly confused and gridlocked as it is forced to deal with an evenly split parliament by playing German-style coalition-negotiation games. Not sure what that means for AI, except that unilateral action is harder.
For reference, I'm an ex-high school student who just got to vote for the first 3 times in his life because of French political turmoi...
I'm working on a non-trivial.org project meant to assess the risk of genome sequences by comparing them against a public list of the most dangerous pathogens we know of. This would be used to assess the risk from both experimental results in e.g. BSL-4 labs and the output of e.g. protein folding models. The benchmarking would be carried out by an in-house ML model of ours (a rough sketch of the comparison idea is below). Two questions for LessWrong:
1. Is there any other project of this kind out there? Do BSL-4 labs/AlphaFold already have models for this?
2. "Training a model on the most dangerous pa...
I'm taking this post down; it was there to set up an archive.org link as requested by Bostrom, and it no longer serves that purpose. Sorry, this was meant to be discreet.
Poetry and practicality
I was staring up at the moon a few days ago and thought about how deeply I loved my family, and wished to one day start my own (I'm just over 18 now). It was a nice moment.
Then I whipped out my laptop and felt compelled to get back to work, i.e. reading papers for my AI governance course, writing up LW posts, and trading emails with EA France. (These I believe to be my best shots at increasing everyone's odds of survival.)
It felt almost like sacrilege to wrench myself away from the moon and my wonder. Like I was ruining a moment of poetr...
Too obvious imo, though I didn't downvote. This also might not be an actual rationalist failure mode; in my experience at least, rationalists have about the same intuition as all other humans about when something should be taken literally or not.
As for why the comment section has gone berserk, no idea, but it's hilarious and we can all use some fun.
FHI at Oxford
by Nick Bostrom (recently turned into song):
the big creaky wheel
a thousand years to turn
thousand meetings, thousand emails, thousand rules
to keep things from changing
and heaven forbid
the setting of a precedent
yet in this magisterial inefficiency
there are spaces and hiding places
for fragile weeds to bloom
and maybe bear some singular fruit
like the FHI, a misfit prodigy
daytime a tweedy don
at dark a superhero
flying off into the night
cape a-fluttering
to intercept villains and stop catastrophes
and why not base it here?
our spandex costumes
blend in wi...
I've come to think that isn't actually the case. E.g. while I disagree with Being nicer than clippy, it quite precisely nails how consequentialism isn't essentially flawless:
I haven't read that post, but I broadly agree with the excerpt. On green did a good job, imo, of showing how weirdly imprecise optimal human values are.
It's true that when you stare at something with enough focus, it often loses that bit of "sacredness" which I attribute to green. As in, you might zoom in enough on the human emotion of love and discover that it's just an endless ti...
Interesting! Seems like you put a lot of effort into that 9,000-word post. May I suggest you publish it in little chunks instead of one giant post? You only got 3 karma for it, so I assume that those who started reading it didn't find it worth the effort to read the whole thing. The problem is, that's not useful feedback for you, because you don't know which of those 9,000 words readers thought were wrong. If I were building a version of utilitarianism, I would publish it in little bursts of 2-minute posts. You could do that right now with a single section of your original post. Clearly you have tons of ideas. Good luck!
You know, I considered "Bob embezzled the funds to buy malaria nets" because I KNEW someone in the comments would complain about the orphanage. Please don't change.
Actually, the orphanage being a cached thought is precisely why I used it. The writer-pov lesson that comes with "don't fight the hypothetical" is "don't make your hypothetical needlessly distracting". But maybe I miscalculated and malaria nets would be less distracting to LWers.
Anyway, I'm of course not endorsing fund-embezzling, and I think Bob is stupid. You're right in that failu...
Link is broken
Re: sociology. I found a meme you might enjoy, which would certainly drive your teacher through the roof: https://twitter.com/captgouda24/status/1777013044976980114
Yeah, that's an excellent idea. I often spot typos in posts, but refrain from writing a comment unless I collect like three. Thanks for sharing!
A feature I'd like to see on LessWrong: the ability to give quick feedback on a post in the same way you can react to comments (click for image). When you strong-upvote or strong-downvote a post, a little popup menu appears offering some basic feedback options. The feedback is private and can only be seen by the author.
I've often found myself drowning in downvotes or upvotes without knowing why. Karma is a one-dimensional measure, and writing public comments is a trivial inconvenience: this is an attempt at a middle ground, and I expect it to ...
Strong upvote, but I won't tell you why.
Can confirm. Half the LessWrong posts I've read in my life were read in the shower.