papetoast's Shortforms
Explicitly welcomed:
* Help to develop the ideas
* Links to related concepts/pre-existing posts
* Writing critique of everything
Posts that have been curated or that have passed the annual LW review are gold/brown.
It looks so red to me that it just seems like a highly downvoted post.
The evolution of OpenAI’s mission statement (Simon Willison)
As a USA 501(c)(3) the OpenAI non-profit has to file a tax return each year with the IRS. One of the required fields on that tax return is to “Briefly describe the organization’s mission or most significant activities”—this has actual legal weight to it as the IRS can use it to evaluate if the organization is sticking to its mission and deserves to maintain its non-profit tax-exempt status.
You can browse OpenAI’s tax filings by year on ProPublica’s excellent Nonprofit Explorer.
He has some commentary on his blog about the changes each year, or you can read the gist he created directly: https://gist.github.com/simonw/e36f0e5ef4a86881d145083f759bcf25/revisions
I want to be able to change the editor inside the "New Quick Take" popup
Bio should be at the top, and the user's Shortform aka Quick Takes page needs to be special-cased. It is so incredibly hard to find the quick takes page now. Try finding it from https://www.lesswrong.com/users/habryka4
Note: I deleted the sentence habryka is replying to.
For quick takes, people should be more conservative about downvoting beyond approx. -4. (For context, I have been reading all top-level quick takes for over a month now.)
I have been thinking about this since I saw roko complaining about censorship in their own short form.
I couldn't form an opinion on that specific quick take; I read it like twice and it still reads a bit like gibberish. I probably shouldn't have mentioned it. It was really just what started my thinking.
FAQ: What can I post on LessWrong?
Posts on practically any topic are welcomed on LessWrong.
The SmallRig RC120B seems to be rated at 53k lux, not lumens. Perhaps it still works for a lightbox, but probably not for general illumination.
Thank you for double checking.
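For reference, a minimal sketch of the lux-to-lumens distinction. The 1 m measurement distance and 45° beam angle below are assumptions for illustration, not the RC120B's actual spec:

```python
import math

def lux_to_lumens(lux: float, distance_m: float, beam_angle_deg: float) -> float:
    """Convert an illuminance spec (lux at a given distance) into luminous
    flux (lumens), modelling the light as a uniform cone with no spill."""
    intensity_cd = lux * distance_m ** 2  # inverse-square law: I = E * d^2
    # Solid angle of a cone with full apex angle theta: 2*pi*(1 - cos(theta/2))
    omega_sr = 2 * math.pi * (1 - math.cos(math.radians(beam_angle_deg) / 2))
    return intensity_cd * omega_sr

# Assumed (not the actual RC120B spec): 53,000 lux measured at 1 m with a
# 45-degree beam. The same lux figure with a narrower beam or a shorter
# measurement distance would correspond to far fewer total lumens, which is
# why a lux number cannot be read directly as a lumens number.
print(round(lux_to_lumens(53_000, 1.0, 45.0)))  # ~25,000 lm under these assumptions
```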
Overall I am still very uncertain, but lean towards it being fine. Even dentists are giving mixed signals.
Unfortunately these are Chinese sources, but here are dentists saying you can brush immediately.
There are definitely dentists saying you should wait, too.
Update: Brushing after eating acidic food is likely fine.
Context: 7 months ago, me in Adam Zerner's shortform:
I remember something about not brushing immediately after eating though. Here is a random article I googled. This says don't brush after eating acidic food, not sure about the general case.
https://www.cuimc.columbia.edu/news/brushing-immediately-after-meals-you-may-want-wait
“The reason for that is that when acids are in the mouth, they weaken the enamel of the tooth, which is the outer layer of the tooth,” Rolle says. Brushing immediately after consuming something acidic can damage the enamel layer of the tooth.
Waiting about 30 minutes before brushing allows tooth enamel to remineralize and build itself back up.
WARNING: I didn't read these papers except for the conclusions
Should We...
Obsidian ended up being less of a thinking notepad and more of a faster index of things I have read before. Links and graphs are mostly useless, but they make me feel good about myself. Pulling numbers out of my ass, I estimate it takes me 15 seconds to find something I have read and pasted into Obsidian, vs 5-30 minutes before.
Collecting occurrences of people complaining about LW
A decision theorist walks into a seminar by Jessica Hullman
This is Jessica. Recently overheard (more or less):
SPEAKER: We study decision making by LLMs, giving them a series of medical decision tasks. Our first step is to infer, from their reported beliefs and decisions, the utility function under revealed preference assump—
AUDIENCE: Beliefs!? Why must you use the word beliefs?
SPEAKER [caught off guard]: Umm… because we are studying how the models make decisions, and beliefs help us infer the scoring rule corresponding to what they give us.
AUDIENCE: But it’s not clear language models have beliefs like people do.
SPEAKER: Ok. I get it. But, it’s also not clear what people’s beliefs are exactly or that
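The inference the speaker describes can be made concrete with a toy sketch. The decision rule, numbers, and function below are hypothetical, not taken from the actual study:

```python
# Toy revealed-preference inference: recover a utility ratio from an agent's
# reported beliefs and binary treat/don't-treat decisions.
# Assumed decision rule: treat iff p*U(cure) + (1-p)*U(harm) > 0, with the
# utility of doing nothing normalized to 0. At the switching threshold p*,
# p* * U(cure) + (1 - p*) * U(harm) = 0, so -U(harm)/U(cure) = p*/(1 - p*).

def infer_utility_ratio(observations: list[tuple[float, bool]]) -> float:
    """observations: (reported belief p, whether the agent chose to treat).
    Assumes at least one treat and one don't-treat observation.
    Returns the implied -U(harm)/U(cure) ratio from the switching point."""
    treated = [p for p, treat in observations if treat]
    declined = [p for p, treat in observations if not treat]
    p_star = (min(treated) + max(declined)) / 2  # midpoint estimate of threshold
    return p_star / (1 - p_star)

obs = [(0.2, False), (0.4, False), (0.6, True), (0.9, True)]
print(infer_utility_ratio(obs))  # threshold ~0.5 -> harm weighed equal to cure
```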
Thoughts inspired by Richard Ngo's[1] and LWLW's[2] quick takes
Warning: speculation but hedging words mostly omitted.
I don't think a consistent superintelligence which has a single[3] pre-existing terminal goal would be fine with a change in terminal goals. The fact that humans allow their goals to be changed is a result of us having contradictory "goals". As intelligence increases or more time passes, incoherent goals will get merged, eventually into a consistent terminal goal. After this point a superintelligence will not change its terminal goal unless the change increases the expected utility of the old terminal goal, due to e.g. source-code introspection or (acausal) trading.
Partial Quote: In principle evolution would be fine with the terminal genes being replaced, it's just that it's computationally difficult to find a way to do so without breaking downstream dependencies.
Quote: The idea of a superintelligence having an arbitrary utility function doesn’t make much sense to me. It ultimately makes the superintelligence a slave to its utility function which doesn’t seem like the way a superintelligence would work.
I don't think it is possible to have multiple terminal goals and be consistent, so this is redundant.
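A minimal sketch of the expected-utility argument in the main paragraph above: a coherent maximizer scores a proposed change to its own terminal goal with its current utility function, so it accepts only when the side payment (e.g. from a trade) outweighs the lost future value. Numbers are made up:

```python
def accepts_goal_change(eu_old_if_accept: float, eu_old_if_refuse: float) -> bool:
    """A coherent EU maximizer evaluates a proposed change to its own terminal
    goal with its *current* utility function, not the proposed new one."""
    return eu_old_if_accept > eu_old_if_refuse

# Accepting corrupts the goal (future actions score ~0 under the old utility)
# but pockets a side payment up front; refusing keeps the expected future value.
print(accepts_goal_change(eu_old_if_accept=10.0, eu_old_if_refuse=40.0))  # False
# Only when the payment exceeds the lost future value does the agent accept:
print(accepts_goal_change(eu_old_if_accept=50.0, eu_old_if_refuse=40.0))  # True
```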
This article talks about how the US federal (National Institutes of Health / National Science Foundation) funding cuts for science starting from 2024/early 2025 may cause universities to produce more legible research, because other funders (philanthropies, venture capital, industry) value clear communication. This is a new idea to me.
What coding prompts (AGENTS.md / cursor rules / skills) do you guys use? It seems exceedingly difficult to find good ones. GitHub is full of unmaintained & garbage `awesome-prompts-123` repos. I would like to learn from other people's prompts to see what things AIs keep getting wrong and what tricks people use.
Here are mine for my specific Python FastAPI SQLAlchemy project. Some parts are AI generated, some are handwritten; it should be pretty obvious which. This was built iteratively whenever the AI repeatedly failed at a type of task.
AGENTS.md
# Repository Guidelines
## Project Overview
This is a FastAPI backend for a peer review system in educational contexts, managing courses, assignments, student allocations, rubrics, and peer reviews. The
Raw feelings: I am kind of afraid of writing reviews for LW. The writing prompt hints at very high-effort thinking. My vague memory of other people's reviews also feels high effort. The "write a short review" ask doesn't really counter this at all.
I will let the LW mods think about how to get it done better, because having a good implementation seems to be the main bottleneck rather than ideas.
In my own ideal world, I think a quick take should be collapsed (perhaps with a better algorithm) on the main page but never collapsed on the person's quick takes page. But the norm should still shift slightly (~10-20%) against downvoting.
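A sketch of the behaviour I am proposing. The function name and threshold are hypothetical, not LW's actual code; the -4 matches the approximate cutoff mentioned above:

```python
def should_collapse(karma: int, on_front_page: bool, threshold: int = -4) -> bool:
    """Proposed rule: heavily-downvoted quick takes collapse on the front page,
    but are always shown in full on the author's own quick takes page."""
    if not on_front_page:
        return False  # never collapse on the author's own page
    return karma <= threshold

print(should_collapse(karma=-6, on_front_page=True))   # True: collapsed on frontpage
print(should_collapse(karma=-6, on_front_page=False))  # False: visible on own page
```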
Valid. I personally do ponder a very slight bit when voting in general because I think good incentives are important.