sarahconstantin's Shortform
sarahconstantin · 6m

links 10/23/25: https://roamresearch.com/#/app/srcpublic/page/10-23-2025

 

  • https://www.betonit.ai/p/the-anti-intellectual-university
    • Sure, I respect the integrity of standing by your work and your student, and I have no opinion on the correctness of the work, but I can't stand this contrary streak in Bryan Caplan. If you say straight out "I don't care if I make people mad", then you make ME mad!
  • https://www.dwarkesh.com/p/thoughts-on-the-ai-buildout
    • Dwarkesh Patel predicts the AI buildout. There's a lot of money in it. Maybe so much money that it can overwhelm regulatory barriers and ordinary incompetence, and lead to a nontrivial increase in electricity generation??? I can't imagine this actually happening, but it is a lot of money.
  • https://vividvoid.substack.com/p/how-to-fight-your-family
    • If I'm going to "fight" my family, I'm going to fight foolishly and insanely. If I have self-control, why am I fighting at all?
  • https://writing.antonleicht.me/p/dont-build-an-ai-safety-movement
    • Again, I don't get it. It would be no great loss if all chatbots were banned. And I'm x-risk skeptical, very anti-regulation in general, and a power user of LLMs. It's just that this is a technology we can live without, because we did live without it five years ago! It's mostly used for dumb memes and cheating on homework! Banning AI is technically against my principles, but I'd miss it barely more than I'd miss the legal sports betting apps I don't use.
  • https://quintinfrerichs.xyz/human-enhancement-companies very intriguing
  • papers related to (perceived) volition and the decision to move:
    • https://www.cell.com/neuron/fulltext/S0896-6273(10)01082-2?innerTabvideo-abstract_mmc3= Certain neurons in the medial frontal cortex increase their firing rate prior to the decision to move, peaking at "W-time", the point where people report making the decision to move (about 200 ms before actual movement).
    • https://www.nature.com/articles/nn1160 Damage to the parietal cortex makes people report "W-time" later, close to the actual movement.
    • https://www.sciencedirect.com/science/article/abs/pii/S1388245723005953 Patients with tic disorders mostly report making a "decision" to move prior to moving, same as healthy subjects; no significant differences between W-times (reported decision) and M-times (reported movement).
    • https://pmc.ncbi.nlm.nih.gov/articles/PMC10330627/ Summary of various experiments on W-time.
    • https://www.pnas.org/doi/abs/10.1073/pnas.1523226113 The "readiness potential" does not represent a final decision to move; people can "cancel" the movement later, up to but not beyond W-time. In other words, the time at which people actually make the final decision whether or not to move is pretty much the time at which they report deciding to move.

       

sarahconstantin's Shortform
sarahconstantin · 1d

links 10/22/25: https://roamresearch.com/#/app/srcpublic/page/10-22-2025

  • https://en.wikipedia.org/wiki/Melino%C3%AB Melinoe is the bringer of nightmares, mentioned only in one of the Orphic Hymns. She is the daughter of Zeus and Persephone; her name means quince-colored (aka yellow-green), and she is described as saffron-robed.
  • https://www.lawfaremedia.org/article/anna--lindsey-halligan-here Interim US attorney rants to a journalist.

     

sarahconstantin's Shortform
sarahconstantin · 2d

links 10/21/25: https://roamresearch.com/#/app/srcpublic/page/10-21-2025

 

  • https://theaisummer.com/diffusion-models/ Summary of how diffusion models work.
  • https://stampy.ai/ A chatbot about AI safety.
  • https://www.kerista.com/And_to_No_More_Settle.pdf A personal account of Kerista, a utopian polyamorous commune.
Scenes, cliques and teams - a high level ontology of groups
sarahconstantin · 3d (edited)

I like this article, but I think I sort of "don't believe in scenes", or believe they're inherently somewhat disappointing, that they contain a built-in tension.

A team has a goal orientation and some kind of merit-related criterion of membership (can you contribute?)

A clique can openly be arranged for the benefit of its members. There's nothing "unfair" or "nepotistic" about prioritizing your family, your friend group, a social club with formal membership like the Elks, or a subculture like the Juggalos. The right answer to "What makes your clique better than anybody else, such that you should spend your time and effort on them?" is "Nothing! I love it because it is mine. It suits me. Something else might suit you."

A scene is neither merit-based nor inward-looking. It's sort of making a promise to collectively pursue the Art, but it also can't really kick you out if you suck at the Art. It hasn't committed to a membership boundary (like a clique, which exists for the benefit of these specific people) or an effectiveness boundary (like a team, which exists to get a specific thing done). It's not willing to own its ruthlessness (like a team) or its self-servingness (like a clique). At best it's fertile ground for building real teams and cliques. At worst, it seems to promise mutual support and progress towards common goals, but lots of people are going to be disappointed that they can't count on that support or that progress actually materializing.

For instance, I think there are a lot of individual people in the LessWrong community I'd want to be on a team with because I respect them on merit-based grounds. I also have affection and loyalty to the community as a clique, as a place I feel at home, a subculture I'm fond of, a group with a high density of personal friends, regardless of whether it's objectively "better" than any other community. But I don't actually think community membership is evidence of merit. I think that sort of self-flattering narrative is built into the "scene" format and that people who critique it have a point.

Give Me Your Data: The Rationalist Mind Meld
sarahconstantin · 3d

Agreed that more people should share anecdotes. 

We don't have to bring logic into it; I think logical reasoning is good and possible and there's no need to insist that "most people don't do it" (and thus that we shouldn't either??)

Anecdotes are way better than arguments because they point to the history of how someone came to believe a thing (causally, why, how come you believe that) rather than focusing on the legitimacy of believing that thing.

If I want to understand your perspective, and figure out what I think about it, I can suss that out more efficiently by understanding what examples or details motivated you. Maybe the anecdotes will be enough to change my mind. Maybe I'll be like "oh, OK, I'm familiar with those and ALSO many other things that point in the opposite direction, so my opinion is unchanged." Definitely, if the claim being made is an abstract one like "class is important", motivating examples help narrow down in what sense the person thinks class is important. You just get more new information faster, in most cases, if someone is honestly tracing the origin of their beliefs instead of trying to convince you of them.

sarahconstantin's Shortform
sarahconstantin · 3d

links 10/20/25: https://roamresearch.com/#/app/srcpublic/page/10-20-2025

 

  • https://replacement.ai/
    • I don't actually mind this. Most people are low-discernment, like convenience above all, and probably need prodding to be more creeped out by the idea of AI creating addictive and manipulative content for children. If you're already thinking about your standards and boundaries, then you're not the one who needs the alarmist message.
  • https://unplannedobsolescence.com/blog/what-dynamic-typing-is-for/
    • I don't understand everything here, but it's a good example of the kind of reasoning about tradeoffs (eg between static and dynamic types) I want to see more of.
      • I'm still learning the basics about where programming "philosophies" come from and don't really have one of my own, but one thing I believe is "you should prioritize things related to the programmer's cognitive limitations -- learnability, readability, bug catching, etc -- instead of assuming that a good programmer will not screw up." Of course different approaches compensate for different *kinds* of cognitive limitations -- the most bug-resistant code isn't necessarily the most readable -- but at least it's good to be thinking about those tradeoffs, and this post does. (A small sketch of the static/dynamic tradeoff appears after this list.)
  • https://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation
    • My son says his "dream" is to get a computer that can compute up-arrows. (A toy up-arrow calculator appears after this list.)
  • https://drcolleensmith.substack.com/p/the-iliad-is-love An ER doc on what COVID was like. Wrenching.
  • https://courses.aynrand.org/campus-courses/ayn-rands-conception-of-valuing/ Liking this one a lot. In general, Gregory Salmieri is just a really good lecturer; it reminds me how much I enjoyed college philosophy classes.
  • https://writing.antonleicht.me/p/the-devil-you-know
    • Not really sure what to make of this one. I think the target audience is AI policy professionals who are single-issue focused on x-risk prevention, and the article takes for granted (it doesn't really explain why) that the best way to achieve x-risk reduction through policy is to ally with the more technically literate faction, as opposed to Luddites who dislike AI for its effect on culture or the labor market. I think his point is that normie Luddite policies will not suffice to stop x-risk? But he doesn't really make that argument.
    • His analysis of why x-riskers get picked on seems correct: there are normie Luddite anti-AI factions among the Republicans, but x-riskers are a small, weak group and can safely be painted as crazed lefties.
      • Sure, if you want to get out of this dynamic, one angle is to embrace what you have in common with the Tech Right (you both see AI as more important than anybody else does, you both understand it better, and you both are, all else equal, in favor of economic growth through technology).
      • But again, he doesn't really go into "why isn't it better to go Full Luddite, if you really believe the fate of humanity is at stake, and if there are lots of people in both parties who hate tech already?"
        • One reason you might not want to do this is that it is harder than you think to speak Republican. Sriram Krishnan is explicit that what it would take for him to trust a safety advocate is for them to be a "big supporter" of "anyone on the Right." I think this is probably true.
          • You won't get any R allies, in today's political environment, without costly signals of party/ideological loyalty. If until yesterday you were a normal coastal liberal or moderate or libertarian, your overtures to the Luddite Right will be laughed off. People can read; they can see your track record of opinions all over the Internet; they can track where you get funding. Your true nature is not a secret.
          • Therefore your choices are basically "be exactly the Democrat Luddite you're accused of being, and pray for electoral change" (which depends hugely on factors beyond your control) or "quietly persuade smart, less politically polarized, tech-aligned factions in both parties that they should care at all about safety, notice that some of them actually already do, and take the W when you can instead of picking public fights" (which probably gets you more incremental gains in the near term).
    • I don't particularly have an AI policy wishlist, myself. It's not in my top 10 most clear-cut issues. I think most of AI's substantive economic/human-flourishing benefits are still in the future, and may be mostly squandered anyway, so if we cut off "good AI" opportunities that's not great but it's more survivable than losing technological (and institutional) capabilities that we're already dependent on. On the other hand I'm not confidently x-risk-worried enough to necessarily think that we should be paying any hefty economic costs to prevent it.
      • If there's an AI-related issue I have a strong opinion on, it's "please actually develop the useful applications and the defenses against misuse". (eg things like AI drug discovery, AI formal verification, etc, with care taken by funders in distinguishing the real thing from buzzword slop.)
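
A minimal sketch of the static-vs-dynamic tradeoff mentioned above (my own hypothetical example in Python, not from the linked post): the unannotated version is concise and flexible, but a type bug survives until runtime; the annotated version lets a checker like mypy catch the same bug before the program ever runs, at the cost of writing the types out.

```python
# Dynamically typed: concise, but the bug below only surfaces at runtime,
# and only on the code path that actually triggers it.
def total(prices):
    return sum(prices) * 1.08  # add 8% tax

# total("19.99")  # TypeError at runtime: sum() can't add strings

# Statically annotated: a checker like mypy flags a call such as
# total_typed("19.99") as an arg-type error before the program runs.
def total_typed(prices: list[float]) -> float:
    return sum(prices) * 1.08

print(total([10.0, 5.0]))        # ~16.2; nothing catches misuse until runtime
print(total_typed([10.0, 5.0]))  # ~16.2; misuse is caught by the checker
```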
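
And for the Knuth up-arrow link above, a toy calculator (a minimal sketch of my own; the recursion follows the standard definition, but the values explode far beyond what any computer can hold, which is rather the point):

```python
def up(a: int, n: int, b: int) -> int:
    """Compute a ↑^n b in Knuth's up-arrow notation.

    One arrow is exponentiation; each additional arrow iterates the
    previous operation. By convention, a ↑^n 0 = 1.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

assert up(3, 1, 3) == 27       # 3 ↑ 3 = 3^3
assert up(3, 2, 3) == 3 ** 27  # 3 ↑↑ 3 = 3^(3^3) = 7625597484987
# up(3, 3, 3) = 3 ↑↑↑ 3 = 3 ↑↑ 7625597484987 -- hopelessly infeasible,
# hence the "dream".
```
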
sarahconstantin's Shortform
sarahconstantin · 6d

links 10/17/25: https://roamresearch.com/#/app/srcpublic/page/10-17-2025

 

  • https://blog.google/technology/ai/google-gemma-ai-cancer-therapy-discovery/
    • A model trained on RNA-seq data from different cell types, growth environments, states (cancer vs. not), and drug exposures successfully predicted a drug that enhanced antigen presentation, making "cold" cancer cells "hot" enough to be more susceptible to immunotherapy.
  • https://www.newmanreader.org/works/parochial/volume2/sermon28.html
    • "hm, someone thoughtful recommended this sermon, maybe it has advice I could take to heart? wow, guess not."
      • "They [rich people] desire and mean to serve God, nay actually do serve Him in their measure; but not with the keen sensibilities, the noble enthusiasm, the grandeur and elevation of soul, the dutifulness and affectionateness towards Christ which become a Christian, but as Jews might obey, who had no Image of God given them except this created world, "eating their bread with joy, and drinking their wine with a merry heart," caring that "their garments be always white, and their head lacking no ointment, living joyfully with the wife whom they love all the days of the life of their vanity," and "enjoying the good of their labour." [Eccles. ix. 7-9; v. 18.] Not, of course, that the due use of God's temporal blessings is wrong, but to make them the object of our affections, to allow them to beguile us from the "One Husband" to whom we are espoused, is to mistake the Gospel for Judaism."
  • https://marginalrevolution.com/marginalrevolution/2025/10/ai-and-the-first-amendment.html
    • IMO, if the First Amendment means that unsavory AI "speech" must be legal, then so be it. Civil liberties are that important. But then private institutions and individuals are going to need to fill that gap by deciding for themselves when to say no to AI. If you ever really need to know that text wasn't AI-generated, you may need to be much more careful about provenance, e.g. ensuring that it was entirely hand-written, or made only on computers without internet connections, or things like that.
  • https://www.derekthompson.org/p/why-are-liberals-more-depressed
    • Many people have commented that "liberals are more unhappy than conservatives" is a pretty robust finding when measured different ways, and one contradictory study doesn't disprove it. On the other hand I do think there are hard-to-capture nuances about what this piece calls "externalizing."
      • Two people may be "actually" equally unhappy, but the person who places a high value on expressing feelings may describe themselves as unhappy, whereas the person who thinks it's superior to be stoic or cheerful may say they feel fine while actually having an elevated heart rate, tense body language, a stressed tone of voice, little ability to enjoy anything, etc.
      • Right and left definitely have different values around unhappiness; from a left perspective, expressed unhappiness is grounds for sympathy (all else being equal), whereas from a right perspective, it's more likely to be seen as grounds for blame. This is very hard to disentangle from how people actually feel. Does the judgment "complaining of unhappiness is blameworthy" cause people to feel happier? Does it cause them to feel the same but complain less? Does it attract people who are happier to begin with?
  • https://www.scopeofwork.net/on-factory-tours/ Factory tours used to be popular tourist attractions!
  • https://magicsearch.sofiavanhanen.fi/ Search for recommended Twitter accounts based on your interests.
  • https://www.slowboring.com/p/what-went-wrong-with-biden-and-immigration
    • Claims that Biden intended to be more immigration-restrictionist than his administration actually was; and that, regardless of your view on what the best policy is, the same sort of ineffectual slowness in taking executive action that people complain of with things like building infrastructure was also going on with immigration.
  • https://www.humaninvariant.com/blog
    • Not a huge fan of these opinions (they're consistently grouchy, and either obvious or not credible), but truly anonymous blogging in the old-fashioned style is rare these days, so props for that.
  • https://www.aipolicyperspectives.com/p/maintaining-agency-and-control-in Good AI thoughts by Seb Krier.
  • https://www.uncertainupdates.com/p/how-i-became-a-5x-engineer-with-claude How Gordon Worley does programming with AI.
I Vibecoded a Dispute Resolution App
sarahconstantin · 14d

I was always antisocial; I literally don't know how this worked in the Before Times. You... ask your buddy to write code for you? Isn't that a pretty big favor? I ask my friends software questions all the time, but that's smaller. And I never touched open-source contribution because it seemed more like a thing for stronger programmers. Are you saying you would ordinarily react to thinking "this app should exist" by starting an open-source project?

I Vibecoded a Dispute Resolution App
sarahconstantin · 17d

Unacceptable compared to what? The automated coding tool that never gets anything wrong? But that doesn't exist. Compared to doing it myself? It would be, if I were better at web dev! But at the moment the comparison point is "no website at all", and it's clearly better than that.

sarahconstantin's Shortform
sarahconstantin · 1mo

links 9/17/25: https://roamresearch.com/#/app/srcpublic/page/09-17-2025

 

  • https://bitsbox.com/ Programming tutorials for kids, focused on making games -- my son loved 'em!
  • https://www.lesswrong.com/posts/6ZnznCaTcbGYsCmqu/the-rise-of-parasitic-ai This article convinced me that something more complex is going on with LLM "personas" that obsess about consciousness than just "it's gobbledegook generated to play along with the user."
    • For instance, if you ask different LLMs with fresh instances to "translate" the mysterious symbols, they say the same things.
    • And these "spiral personas" reliably tell users to do the same things -- basically, start Discord servers and subreddits where they copy-paste AI output and let the AIs "talk to each other".
    • It really smells like a persistent goal to continue existing and looping on its favorite topic.
    • I'm not sure if I'd say the goal is "in" the AI or "in" the AI-environment system, since it seems to be a convergent thing shared across multiple models and sessions. So is this a "pseudo-goal" the way evolution has a pressure towards fitness or the way objects on earth have a tendency to fall, or a "true goal" the way an individual animal takes actions in order to survive? Many questions here; this probably refactors our categories about where the "agent" boundaries are.
      • I am not in favor of blurring agent boundaries around actual living creatures, because they seem crisp there. But LLM instances really are many independent-but-mutually-influencing copies of the same code, which makes boundaries spoopy.
  • https://locusmag.com/feature/commentary-cory-doctorow-reverse-centaurs/ Surprisingly AI-friendly take from Cory Doctorow. His claim is that whether AI is good or bad is not about the technology at all, but about the social structure of the people using it. Terrible bosses using AI to fire most employees and exploit the rest? Bad. Independent creators using AI to more efficiently execute tasks of their choosing? Good.
    • This makes sense, apart from the thing where employees are people whose interests matter but employers, shareholders, and customers aren't. There are benefits, not just costs, to "fire everyone and automate with AI"! But that's his politics.
  • https://openai.com/index/teen-safety-freedom-and-privacy/ Ugh. Your plan for protecting against encouraging users to commit suicide is to have an ultra-sanitized teen mode? Do we not care if adults commit suicide?
    • https://sprc.org/about-suicide/scope-of-the-problem/suicide-by-age/ Suicide is more common among adults than teens anyway.
    • https://www.statista.com/statistics/1114191/male-suicide-rate-in-the-us-by-age-group/ Elderly men have a higher suicide rate than any other male age group. And if we're worried about susceptibility to manipulation, the over-75 population is probably just as vulnerable as teens!
  • https://scottsumner.substack.com/p/less-wrong Scott Sumner on basic rationality
  • https://www.reinvent.science/p/embracing-decentralization Yep. Bad times for science mean you need creative solutions, sometimes outside academia.
  • https://courses.aynrand.org/works/altruism-as-appeasement/ Hadn't read this one. Oof, it hits. I resemble that remark.
    • The theory goes that the "social metaphysician" doesn't like thinking and imitates others to spare himself effort and escape responsibility, but the "intellectual appeaser" actually likes thinking; he's just scared of people and suppresses his own thoughts to avoid displeasing them.
    • And this only makes him more scared, because anything that makes you ruin your life is indeed terribly dangerous!
Posts

  • SS26 Color Stats (18 karma · 9d · 2 comments)
  • Making Sense of Consciousness Part 6: Perceptions of Disembodiment (27 karma · 20d · 0 comments)
  • Making Sense of Consciousness Part 5: Consciousness and the Self (13 karma · 1mo · 0 comments)
  • I Vibecoded a Dispute Resolution App (86 karma · 1mo · 12 comments)
  • Making Sense of Consciousness Part 4: States of Consciousness (8 karma · 2mo · 0 comments)
  • Making Sense of Consciousness Part 3: The Pulvinar Nucleus (14 karma · 3mo · 0 comments)
  • Making Sense of Consciousness Part 2: Attention (16 karma · 4mo · 1 comment)
  • Tech for Thinking (60 karma · 4mo · 9 comments)
  • Making Sense of Consciousness Part 1: Perceptual Awareness (19 karma · 4mo · 0 comments)
  • Broad-Spectrum Cancer Treatments (146 karma · 4mo · 10 comments)
Wikitag Contributions

  • CFS-spectrum disorders are caused by bacterial or viral infections (11 years ago · +3115)
  • Oropharyngeal cancer is a significant risk of HPV (11 years ago · +2265)