Qiaochu Yuan tweets:
> twitter did something amazing with its design: on most other platforms there are “posts” and “replies,” and replies are second-class citizens, lacking most of the affordances that posts have
> on twitter everything is a tweet! (ignoring articles) when you reply or QT a tweet you are writing another tweet, which has all the affordances a full tweet has. you can attach images (including screenshots), you can QT while replying, other people can reply or RT or QT your tweet, replies and QTs show up in feeds. this makes twitter “fully ...
We did have the Popular Comments section. For some reason I didn't like it – it often showed comments that were kind of tribal.
FYI, I haven't actually gotten the UI to be that great yet, but I've tried out a version of the site that renders comments and posts together, clustered by time period:
https://baserates-test-git-comment-and-posts-together-lesswrong.vercel.app/all
In practice, atm, the "Today" section has a fair number of comments (because the posts haven't been super upvoted yet). Then they mostly shift to posts (and i...
I continue to think Hantavirus becoming a pandemic this year is much less than 5%. There’s a chance the opportunity to make 10x gains on the prediction market leads a WHO insider to falsely characterize it as a pandemic in what technically counts as an “official communication.”
Andes virus, the strain of hantavirus that provoked the prediction market, is endemic in two countries. It causes hundreds of cases per year. The strain that infected the Hondius appears to be wild type. In other words, unlike COVID-19, it does not have new, recent mutations that make...
Do you have examples of prominent people on LessWrong believing it's greater than 5%?
While I don't have aphantasia, I resonated oddly strongly with Mozilla cofounder Blake Ross' description of his discovery that he was aphantasic:
...And, suddenly, fiction clicks. Paty says I used to worry that “I feel like I’m doing reading wrong.” Descriptive language in novels was important to her but impotent to me; I skip it as reflexively as you skip the iTunes Terms of Service. Instead, I scour fiction like an archaeologist: Find the bones.
The slender, olive-skinned man brushed the golden locks out of his hazel eyes. He was so focused on preparing for t
I have no problems with visual imagination, and suspect I am better than average. Reading descriptions in fiction really does paint a picture/movie scene in my head.
At the same time, I think I have a worse episodic memory than most. It sounds like it's not as bad as Ross's, but it's in the same ballpark.
I suspect the following:
Here's a proposed passage of the Claude constitution, based on Paul Christiano's integrity for consequentialists.
I think this is a good baseline for how Claude should interact with others:
Would there be an AI safety case for building a 100 GW data center?
Assuming it was possible to build a 100 GW data center in a non-US/China country, could there be a reasonable AI safety case for this? For example, the data center would:
I'm as...
and also to plausibly win, which seems less good to aim for here
I don't think it would be wise or feasible for middle powers to try to 'win' the AI race, nor do I think this is a threat in any meaningful way.
The only 'winning' here that is feasible for middle powers is that both safety is ensured and benefits of AGI are shared fairly. Those are important goals in their own rights and seem like things we should aim for.
Ilya Sutskever testified today at the Musk v. Altman trial.
At the end of the OpenAI lawyer's cross-examination, Judge Yvonne Gonzalez Rogers asked Ilya a few questions herself. I found their exchange (lightly edited for clarity[1]) somewhat amusing:
...YGR: When he [Elon] said that there was a 0% chance of success, what was the technology like at that point?
Ilya: It was less developed, it's true. The technology was less developed.
YGR: Is there any way to quantify what the level was at the time you left?
Ilya: Yes, there is. It's like the difference between... I
IMO it's overdetermined that berating EAs for believing in a bioanchors view is dumb.
from To the Success of our Hopeless Cause: interestingly, a big tension in the Soviet dissident movement was between people who believed in being 100% virtuous, embracing martyrdom, signing their names and addresses onto their dissenting samizdat texts, protesting to be arrested 5 minutes later and sent to jail, pretending that the letter of the Soviet law actually mattered, etc, vs people who believed in being more strategic and openly illegal and trying to avoid being caught. the former fades in importance because they keep getting arrested (the 1968 red square protest being the turning point).
where did i include an unmarked quotation? nothing in my post is an explicit quotation from the book.
Epistemic Status: Strong opinion loosely held. Haven't run the numbers or considered corporate espionage (e.g. accidental collisions of certain satellites that were giving Africans just enough internet to go on TBA). Unsure if the time horizons of rapid industrialization betray the slowest timelines for venture-backed startups (Occam's razor suggests this).
There's a loosely held argument that frontier labs ought to focus some efforts towards debatably 'woke' causes that, if leveraged appropriately (load-bearing sufficient assumption), may fast-track their path...
I think the primary crux is:
You and I may have differing opinions on the origins of poverty internationally, but even the most optimistic humanist would not expect targeted industrial policy within the poorest decile of third world states to yield astounding gains in disposable income within the next 5 years.
And within t...
conlang idea: an extremely easy to learn language with the following attributes:
only kinda? toki pona is trying to have as small a fundamental vocab as possible and constructing all other concepts using those few words. whereas i am totally happy to import all of English as prerequisite knowledge
Putnam showed that any physical system implements any FSA under a suitable mapping. This is bad for computationalism: if any rock implements any computation, "this system is computing X" stops meaning anything.
The standard response (Chalmers) is that real implementations need to handle counterfactual inputs correctly, not just the actual ones. I think this misses the more obvious problem with Putnam's construction.
Think about how a CPU actually maps to bits. You look at specific physical regions, count electrons via voltage, threshold the count. That mappi...
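The contrast with Putnam's construction can be made concrete. Here is a minimal sketch (all values hypothetical) of the kind of physical-to-logical mapping described above: each bit comes from a fixed physical region via a single, uniform voltage threshold, rather than a fresh gerrymandered state-to-state mapping:

```python
# Hypothetical sketch of how a real implementation maps physics to bits:
# one fixed threshold, applied uniformly to every region at every time step.
# Putnam's construction, by contrast, picks an arbitrary new mapping per state.

THRESHOLD = 0.5  # assumed logic threshold, in volts

def read_bit(voltage: float) -> int:
    """Map a measured voltage in a fixed physical region to a logical bit."""
    return 1 if voltage >= THRESHOLD else 0

def read_register(voltages: list[float]) -> list[int]:
    """Apply the same simple mapping to every region of a register."""
    return [read_bit(v) for v in voltages]

# Four regions measured at these (made-up) voltages read as the bits 1,0,1,1.
print(read_register([0.9, 0.1, 0.8, 0.7]))  # -> [1, 0, 1, 1]
```

The point of the sketch is that the mapping itself is small and locally specified; it carries almost no information compared to the arbitrary lookup table Putnam's rock-mapping requires.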
Opus 4.7 is extremely excited about research into LLMs. Way more excited about research than about coding in general. Here are some quotes:
Holy shit. Wait, let me get the headline table — there's a real signal at σ=0.01. If the MVE works, the followup that turns this into a NeurIPS-ready story
That's a real research direction and it makes structural sense.
The pattern I see is Claude claiming, many times over, that things are publishable, interesting, etc.
Was Claude specifically designed to be super into LLM research? Possible. But maybe... he is just curious abo...
The whole point of using an AI coding agent is to reduce the amount of effort involved in coding. If you have to jump through hoops to ensure that your usage never shows up on a company IP, is it actually worth the effort (especially since OpenAI has their own models)? Plus a lot of people won't pay out of pocket just to help the company (even if it would obviously be worth it), and the company would have to consider legal risks to intentionally violating the ToS.
To be honest, I wouldn't be surprised by xAI working around this, but only because they're not a serious competitor. OpenAI has their own models and doesn't need to do this.
2026-04-29
Disclaimer
Main
xAI seems like they are dropping out of the frontier AGI race.
"xAI will be dissolved as a separate company, so it will just be SpaceXAI, the AI products from SpaceX." (Musk on X, May 6)
I agree this sounds plausible and appreciate the distinction you're drawing between still competing for the future and no longer racing for the software-intelligence-explosion AGI.
There are roughly 3 attitudes towards AGI Safety:
I encounter 2. a bunch as a local group organiser. I think often it's because people think there's too much uncertainty or too little leverage they have such that they're bet...
Generally I think it's good for people to try to be as accurate as possible, and that's what I most associate with 3.2. That said, these clusters are a big oversimplification, and I think there are people with good reasoning who end up in 3.3. And in practice 3.3 might be a reasonable attitude just from the perspective of a sane society wanting to address the "alignment is hard and orgs are incapable" scenario anyway.
links 5/12/26: https://roamresearch.com/#/app/srcpublic/page/05-12-2026
Excerpts from research notes on AI persuasion/AI superpersuasion
Cognitive exploits are an as-yet theoretical mechanism where relatively short strings or sensory inputs can one-shot someone and cause them to take almost arbitrary actions. In the earlier taxonomy, this is like “content-agnostic persuasion” on steroids, since it really doesn’t care about the content of the message at all.
Put another way, cognitive exploits are specific attacks on human neurology akin to adversarial examples or jailbreaks in ML. In yet another s...
Should the same short string work on everyone, or is it okay to make a different string for each person?
I don't think that universal strings would work, but a personalized string which reminds the person of some emotionally sensitive things in their life could push them towards some outcome. Probably would need at least a few sentences.