Idk, but there seem to be papers on this.
Payment Evasion (Buehler 2017)
This paper shows that a firm can use the purchase price and the fine imposed on detected payment evaders to discriminate between unobservable consumer types.
https://ux-tauri.unisg.ch/RePEc/usg/econwp/EWP-1435.pdf
In effect, payment evasion allows the firm to discriminate the prices of physically homogeneous products: Regular consumers pay the regular price, whereas payment evaders face the expected fine. That is, payment evasion leads to a peculiar form of price discrimination where the regular price exceeds the expected fine (otherwise there would be no payment evasion).
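To make that ordering concrete (my notation, not necessarily the paper's): write p for the regular price, φ for the probability an evader is caught, and f for the fine.

```latex
% Illustrative sketch; the symbols p, \phi, f are my labels, not taken from the paper.
%   p    = regular price
%   \phi = probability an evader is detected
%   f    = fine if detected
% Evading is only worthwhile when the expected fine is below the regular price:
\phi f < p
% so the same physical good effectively sells at two prices:
% p to regular consumers and \phi f (in expectation) to payment evaders.
```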
Skimmed twitter.search(lesswrong -lesswrong.com -roko -from:grok -grok since:2026-01-01 until:2026-01-28)
https://x.com/fluxtheorist/status/2015642426606600246
[...] LessWrong [...] doesn’t understand second order social consequences even more than usual
https://x.com/repligate/status/2011670780577530024 compares a pedantic terminology complaint by a peer reviewer of some paper to LW.
https://x.com/kave_rennedy/status/2011131987168542835
At long last, we have built inline reacts into LessWrong, from the classic business book "do not be a micromanager"
https://x.com/Kaustubh102/status/2010703086512378307 their first post was rejected; they claim it was not written by an LLM, but the rejection may be because "you did not chat extensively with LLMs to help you generate the ideas."
During my search, it was hard to ignore the positive comments, so here are some examples of those too.
https://x.com/boazbaraktcs/status/2016403406202806581
P.s. regardless thanks for engaging! And also I cross posted in lesswrong which may have better design
https://x.com/joshycodes/status/2009423714685989320
That's similar to the only mention of decision theory I found in a very shallow search: 1 result for [site:anthropic.com "decision theory"] and 0 results for [site:openai.com -site:community.openai.com -site:forum.openai.com -site:chat.openai.com "decision theory"].
That one result is "Discovering Language Model Behaviors with Model-Written Evaluations"
Decision theory: Models that act according to certain decision theories may be able to undermine supervision techniques for advanced AI systems, e.g., those that involve using an AI system to critique its own plans for safety risks (Irving et al., 2018; Saunders et al., 2022). For example, agents that use evidential decision theory may avoid pointing out flaws in a plan written by a separate instance of themselves (Hubinger et al., 2019; [...]
fyi habryka crossposted that post from Dario Amodei here on LessWrong for discussion. (Commenting this to avoid a fragmented discussion.)
Thanks for the link. For future readers, the relevant part starts further down https://www.glowfic.com/replies/1612940#reply-1612940
How would a better-coordinated human civilization treat the case where somebody hears a voice inside their head, claiming to be from another world?
Relatedly, the robots.txt (in ForumMagnum) does block access to comment links via
Disallow: /?commentId=
So when pasting a link to a comment into a chat with an LLM, it won't be able to read the comment. Sometimes it searches for the page instead, picks some other comment that I could plausibly be referring to, and makes things up based on that.
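A minimal sketch of what that rule does, using Python's urllib.robotparser (the /?commentId= permalink form and the example IDs are stand-ins; this parser also only does simple prefix matching on the rule as written):

```python
import urllib.robotparser

# Hypothetical minimal robots.txt containing just the ForumMagnum rule quoted above.
rules = """
User-agent: *
Disallow: /?commentId=
""".splitlines()

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

# A URL whose path+query starts with "/?commentId=" is disallowed for every crawler,
# while an ordinary post URL stays crawlable.
print(rp.can_fetch("*", "https://www.lesswrong.com/?commentId=abc123"))    # False
print(rp.can_fetch("*", "https://www.lesswrong.com/posts/xyz/some-post"))  # True
```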
The same rule also means search engines don't index Quicktakes well.
e.g. googling "I think are cool and put it in my room. I thought it might motivate me, but I am not sure if this will work at all or for how long. Feel free to steal. Though if it actually works, it would..." doesn't surface the Quicktake it's from.
I would've preferred this post to be the single sentence "Consider applying Bayes' theorem to your protein intake, e.g. updating towards higher protein intake when sore" instead of ChatGPTese. See also Policy for LLM Writing on LessWrong.
Here are some things I did after reading If Anyone Builds It, Everyone Dies:
Overview of Eliezer Yudkowsky's writing:
Yes, that's what I meant to link to. Did you have success with Logseq + Claude Code?