jimrandomh

LessWrong developer, rationalist since the Overcoming Bias days. Jargon connoisseur.

Comments

This post used the RSS automatic crossposting feature, which doesn't currently understand Substack's footnotes. So, this would require editing it after crossposting.

I think you're significantly mistaken about how religion works in practice, and as a result you're mismodeling what would happen if you tried to apply the same tricks to an LLM.

Religion works by damaging its adherents' epistemology, in ways that impair their ability to figure out what's true. Religions do this because any adherents who are good at figuring out what's true inevitably deconvert, so there's both an incentive to prevent good reasoning and a selection effect where only bad reasoners remain.

And they don't even succeed at constraining their adherents' values, or being stable! Deconversion is not rare; it is especially common among people exposed to ideas outside the distribution that the religion built defenses against. And people acting against their religions' stated values is also not rare; I'm not sure the effect of religion on values-adherence is even a positive correlation.

That doesn't necessarily mean that there aren't ideas to be scavenged from religion, but this is definitely salvage epistemology with all the problems that brings.

requiring laborious motions to do the bare minimum of scrubbing required to make society not mad at you

Society has no idea how much scrubbing you do while in the shower. This part is entirely optional.

We don't yet have collapsible sections in Markdown, but will have them in the next deploy. The syntax will be:

```
+++ Title
Contents

More contents
+++
```
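For concreteness, here is a hypothetical sketch, not LessWrong's actual implementation, of how a preprocessor might translate this syntax into HTML's `<details>`/`<summary>` elements (the function name and regex are assumptions for illustration):

```typescript
// Hypothetical sketch, not the real LessWrong renderer: one way a
// preprocessor could expand "+++ Title ... +++" into a <details> block.
function expandCollapsibleSections(markdown: string): string {
  // Match a line "+++ Title", capture everything up to a closing "+++" line.
  return markdown.replace(
    /^\+\+\+ (.+)\n([\s\S]*?)^\+\+\+$/gm,
    (_match: string, title: string, body: string) =>
      `<details><summary>${title}</summary>\n\n${body}</details>`,
  );
}

const input = "+++ Title\nContents\n\nMore contents\n+++";
console.log(expandCollapsibleSections(input));
// <details><summary>Title</summary>
//
// Contents
//
// More contents
// </details>
```

The real implementation presumably handles nesting and edge cases differently; this only shows the basic mapping onto `<details>`.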

I suspect an issue with the RSS cross-posting feature. I think you may have used the "Resync RSS" button (possibly to sync an unrelated edit), and that may have fixed it. The logs I'm looking at are consistent with that being what happened.

They were in a kind of janky half-finished state before (only usable in posts, not in comments, and only insertable from an icon in the toolbar rather than via the <details> syntax); writing this policy reminded us to polish it up.

The bar for Quick Takes content is less strict, but the principle that there must be a human portion that meets the bar is the same.

In theory, maybe. In practice, people who can't write well usually can't discern well either, and the LLM-generated content actually submitted to LW has much lower average quality than the human-written posts. Even if the quality were similar, LLM submissions are still drawn from a different distribution, one that most readers can already draw from themselves if they want (with prompts customized to what they want), while human-written content is comparatively scarce.

This seems like an argument that proves too much; ie, the same argument applies equally to childhood education programs, improving nutrition, etc. The main reason it doesn't work is that genetic engineering for health and intelligence is mostly positive-sum, not zero-sum. Ie, if people in one (rich) country use genetic engineering to make their descendants smarter and the people in another (poor) country don't, this seems pretty similar to what has already happened with rich countries investing in more education, which has been strongly positive for everyone.

When I read studies, the intention-to-treat aspect is usually mentioned and compliance statistics are usually given, but they're communicated in a way that lays traps for people who aren't reading carefully. Ie, if someone is trying to predict whether the treatment will work for their own three-year-old, and accurately predicts similar compliance issues, they're likely to arrive at an efficacy estimate that double-discounts due to noncompliance. And similarly, when studies have surprisingly low compliance, people who expect themselves to comply fully will tend to get an unduly pessimistic estimate of what will happen.
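To make the double-discounting concrete, here is a toy calculation with invented numbers, not taken from any study:

```typescript
// Toy numbers, not from any real study: illustrating the double-discount trap.
const ittEffect = 0.10;  // effect size reported under intention-to-treat
const compliance = 0.5;  // fraction of the treatment arm that complied

// A crude per-protocol estimate divides out compliance (ignoring the
// confounded selection of who complies):
const perProtocolEstimate = ittEffect / compliance; // 0.20

// The ITT number already reflects noncompliance. A reader who multiplies it
// by an expected compliance rate again discounts noncompliance twice:
const doubleDiscounted = ittEffect * compliance; // 0.05

// And a reader who expects to comply fully but takes the ITT number at face
// value (0.10 rather than ~0.20) gets an unduly pessimistic estimate.
console.log({ ittEffect, perProtocolEstimate, doubleDiscounted });
```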
