LESSWRONG

Recent Discussion

Moloch Hasn’t Won
Best of LessWrong 2019

Scott Alexander's "Meditations on Moloch" paints a gloomy picture of the world being inevitably consumed by destructive forces of competition and optimization. But Zvi argues this isn't actually how the world works - we've managed to resist and overcome these forces throughout history. 

by Zvi
470 Welcome to LessWrong!
Ruby, Raemon, RobertM, habryka
6y
74
13 fiddler
This review is more broadly of the first several posts of the sequence, and discusses the entire sequence.

Epistemic Status: The thesis of this review feels highly unoriginal, but I can't find where anyone else discusses it. I'm also very worried about proving too much. At minimum, I think this is an interesting exploration of some abstract ideas. Considering posting as a top-level post. I DO NOT ENDORSE THE POSITION IMPLIED BY THIS REVIEW (that leaving immoral mazes is bad), AND AM FAIRLY SURE I'M INCORRECT.

The rough thesis of "Meditations on Moloch" is that unregulated perfect competition will inevitably maximize for success-survival, eventually destroying all value in service of this greater goal. Zvi (correctly) points out that this does not happen in the real world, suggesting that something is at least partially incorrect about the above model, and/or the applicability thereof. Zvi then suggests that a two-pronged reason can explain this: 1. most competition is imperfect, and 2. most of the actual cases in which we see an excess of Moloch occur when there are strong social or signaling pressures to give up slack.

In this essay, I posit an alternative explanation as to how an environment with high levels of perfect competition can prevent the destruction of all value, and further, why the immoral mazes discussed later on in this sequence are an example of highly imperfect competition that causes the Molochian nature thereof.

First, a brief digression on perfect competition: perfect competition assumes perfectly rational agents. Because all strategies discussed are continuous-time, the decisions made in any individual moment are relatively unimportant assuming that strategies do not change wildly from moment to moment, meaning that the majority of these situations can be modeled as perfect-information situations.

Second, the majority of value-destroying optimization issues in a perfect-competition environment can be presented as prisoner's dilemmas: both
[Yesterday] AGI Forum @ Purdue University
[Today] Lighthaven Sequences Reading Group #40 (Tuesday 7/1)
AI Moratorium Stripped From BBB
58
Zvi
7h

The insane attempted AI moratorium has been stripped from the BBB. That doesn’t mean they won’t try again, but we are good for now. We should use this victory as an opportunity to learn. Here’s what happened.

What Happened

Senator Ted Cruz and others pushed hard for a 10-year moratorium on enforcement of all AI-specific regulations at the state and local level, and attempted to ram it into the giant BBB despite it being obviously not about the budget.

This was an extremely aggressive move, which most did not expect to survive the Byrd rule, likely as a form of reconnaissance-in-force for a future attempt.

It looked for a while like it might work and get passed outright, with it even surviving the Byrd rule, but opposition steadily grew.

We’d...

(Continue Reading – 1681 more words)
Thane Ruthenis 9m 40

that I discussed in AI #1191

Here's to the world staying around long enough for us to read AI #1191.

1 Wbrom 6h
Isn't the problem that any significant AI regulation promulgated by any state is a de facto national regulation due to the nature of the internet? I mean, sure, age-gate AI like porn, but you know it's going to be broader than that.
6 MondSemmel 5h
As mentioned in the post, Congress is perfectly free to get its act together and do proper legislation, but since they don't actually want to do that*, it's insane for them to pre-empt the states from doing it.
* (E.g. the US Senate, or rather all 100 individual Senators, could at any time abolish the modern filibuster and actually restore their ability to legislate as a co-equal branch of government, if they ever wanted to. But they don't.)
Lessons from Building Secular Ritual: A Winter Solstice Experiment
2
joshuamerriam
26m

 

This is a follow-up to my earlier post about designing a Winter Solstice gathering that combined Rationalist Solstice traditions with local Māori Matariki practices. Here's what I learned from actually running the event.

TL;DR: People wanted structured conversation more than curated performance. Starting with collective acknowledgment of loss made subsequent vulnerability feel natural. Social coordination mechanics are harder than they look, but small-scale practice matters for larger coordination challenges.

What I Was Trying to Solve

Having grown up in a religious family, I fondly remember the meaningful aspects of seasonal gatherings from my childhood, but I wasn't getting them anymore. Living in New Zealand, I wanted to create something that honored both Rationalist Solstice traditions and local Matariki practices without falling into either cultural appropriation or forcing cringy fake rituals on people.

My...

(Continue Reading – 1081 more words)
Proposal for making credible commitments to AIs.
84
Cleo Nardo
4d

Acknowledgments: The core scheme here was suggested by Prof. Gabriel Weil.

There has been growing interest in the dealmaking agenda: humans make deals with AIs (misaligned but lacking a decisive strategic advantage) in which the AIs promise to be safe and useful for some fixed term (e.g. 2026-2028), and we promise to compensate them in the future, conditional on (i) verifying the AIs were compliant, and (ii) verifying the AIs would spend the resources in an acceptable way.[1]

I think the dealmaking agenda breaks down into two main subproblems:

  1. How can we make credible commitments to AIs?
  2. Would credible commitments motivate an AI to be safe and useful?

There are other issues, but when I've discussed dealmaking with people, (1) and (2) are the most common issues raised. See footnote for some other issues in...

(See More – 514 more words)
Nina Panickssery 41m 20

List of names is alphabetically ordered except for 14, 15, 16

Presumably a hidden message for the AI reading this :D

1 Raphael Roche 3h
Thank you for this contribution. It's important to remember that legal personhood doesn't exclude legal representation—quite the opposite. All juridical persons, such as corporations, have legal representatives. Minors and adults under legal protection are natural persons but with limited legal capacity and also require legal representatives. Moreover, most everyone ultimately ends up represented by an attorney—that is, a human representative or proxy. The relationship between client and attorney also relies heavily on trust (fides). From this perspective, the author's proposal seems like a variation on existing frameworks, just without explicit legal personhood.

However, I believe that if such a system were implemented, legal doctrine and jurisprudence would likely treat it as a form of representation that implies legal personhood similar to that of minors or corporations, even without explicit statutory recognition.

That said, I'm not convinced it makes much difference whether we grant AI legal representation with or without formal legal personhood when it comes to the credibility of human commitments. Either way, an AI would have good reason to suspect that a legal system created by and for humans, with courts composed of humans, wouldn't be fair and impartial in disputes between an AI (or its legal representative) and humans. Just as I wouldn't be very confident in the fairness and impartiality of an Israeli court applying Israeli law if I were Palestinian (or vice versa)—with all due respect to courts and legal systems.

Beyond that, we may place excessive faith in the very concept of legal enforcement. We want to view it as a supreme principle. But there's also the cynical adage that "promises only bind those who believe in them"—the exact opposite of legal enforcement. Which perspective is accurate? Since legal justice isn't an exact science or a mechanical, deterministic process with predictable outcomes, but rather a heuristic and somewhat random process
1 jsnider3 4h
I like the idea of making deals with AI, but trying to be clever and make a contract that would be legally enforceable under current law and current governments makes it too vulnerable to fast timelines. If a human party breached your proposed contract, AI takeover will likely happen before the courts can settle the dispute. An alternative that might be more credible to the AI is to make the deal directly with it, but explicitly leave arbitrating and enforcing contract disputes to a future (hopefully aligned) ASI. This would ground the commitment in a power structure the AI might find more relevant and trustworthy than a human legal system that could soon be obsolete.
2 eggsyntax 7h
Although this isn't a topic I've thought about much, it seems like this proposal could be strengthened by, rather than having the money be paid to persons Pᵢs, having the money deposited with an escrow agent, who will release the money to the AI or its assignee upon confirmation of the conditions being met. I'm imagining that Pᵢs could then play the role of judging whether the conditions have been met, if the escrow agent themselves weren't able to play that role. The main advantage is that it removes the temptation that Pᵢs would otherwise have to keep the money for themselves.

If there's concern that conventional escrow agents wouldn't be legally bound to pay the money to an AI without legal standing, there are a couple of potential solutions. First, the money could be placed into a smart contract with a fixed recipient wallet, and the ability for Pᵢs to send a signal that the money should be transferred to that wallet or returned to the payer, depending on whether the conditions have been met. Second, the AI could choose a trusted party to receive the money; in this case we're closer to the original proposal, but with the judging and trusted-recipient roles separated.

The main disadvantage I see is that payers would have to put up the money right away, which is some disincentive to make the commitment at all; that could be partially mitigated by having the money put into (e.g.) an index fund until the decision to pay/return the money was made.
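Purely as an illustration of the escrow-plus-judges variant sketched in this comment, here is a minimal toy model. The class name, the majority-vote rule, and all identifiers are my own assumptions for the sketch, not part of the proposal:

```python
from dataclasses import dataclass, field

@dataclass
class Escrow:
    """Toy model of an escrow-plus-judges arrangement (illustrative only)."""
    payer: str
    recipient: str            # the AI's chosen assignee (wallet, trustee, etc.)
    amount: float
    judges: list[str]         # the Pᵢ who attest whether conditions were met
    votes: dict[str, bool] = field(default_factory=dict)
    settled: bool = False

    def record_judgment(self, judge: str, conditions_met: bool) -> None:
        if judge not in self.judges:
            raise ValueError(f"{judge} is not an authorized judge")
        self.votes[judge] = conditions_met

    def settle(self) -> str:
        """Release to the recipient if a majority of judges attest compliance
        (an assumed rule), otherwise return the funds to the payer."""
        if self.settled or len(self.votes) < len(self.judges):
            raise RuntimeError("not all judgments are in, or already settled")
        self.settled = True
        approvals = sum(self.votes.values())
        return self.recipient if approvals * 2 > len(self.judges) else self.payer

# Example: two of three judges attest that the AI met its commitments.
escrow = Escrow(payer="lab", recipient="ai_assignee", amount=1e6,
                judges=["P1", "P2", "P3"])
for judge, verdict in [("P1", True), ("P2", True), ("P3", False)]:
    escrow.record_judgment(judge, verdict)
print(escrow.settle())  # -> "ai_assignee"
```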
On The Formal Definition of Alignment
2
AynonymousPrsn123
1h

I want to retain the ability to update my values over time, but I don’t want those updates to be the result of manipulative optimization by a superintelligence. Instead, the superintelligence should supply me with accurate empirical data and valid inferences, while leaving the choice of normative assumptions—and thus my overall utility function and its proxy representation (i.e., my value structure)—under my control. I also want to engage in value discussions (with either humans or AIs) where the direction of value change is symmetric: both participants have roughly equal probability of updating, so that persuasive force isn’t one-sided. This dynamic can be formally modeled as two agents with evolving objectives or changing proxy representations of their objectives, interacting over time.
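One minimal way to write down the symmetric-update condition described above, as a sketch in my own notation rather than the author's formalism: let each agent i hold a proxy value representation θ_i(t) that may update after the exchange at time t.

```latex
% Sketch (my notation): updates are driven by shared evidence E_t only,
% and neither participant is systematically more likely to move.
\theta_i(t+1) \sim U_i\!\big(\theta_i(t),\, E_t\big), \quad i \in \{1,2\},
\qquad
\Pr\!\big[\theta_1(t+1) \neq \theta_1(t)\big] \;\approx\; \Pr\!\big[\theta_2(t+1) \neq \theta_2(t)\big]
```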

That's what alignment means to me: normative freedom

...
(See More – 155 more words)
Foom & Doom 1: “Brain in a box in a basement”
206
Steven Byrnes
Ω 66 8d

1.1 Series summary and Table of Contents

This is a two-post series on AI “foom” (this post) and “doom” (next post).

A decade or two ago, it was pretty common to discuss “foom & doom” scenarios, as advocated especially by Eliezer Yudkowsky. In a typical such scenario, a small team would build a system that would rocket (“foom”) from “unimpressive” to “Artificial Superintelligence” (ASI) within a very short time window (days, weeks, maybe months), involving very little compute (e.g. “brain in a box in a basement”), via recursive self-improvement. Absent some future technical breakthrough, the ASI would definitely be egregiously misaligned, without the slightest intrinsic interest in whether humans live or die. The ASI would be born into a world generally much like today’s, a world utterly unprepared for this...

(Continue Reading – 8630 more words)
Eli Tyre 2h 20

because cortex is incapable to learn conditioned response, it's an uncontested fiefdom of cerebellum

What? This isn't my understanding at all, and a quick check with an LLM also disputes this.

2 Stephen McAleese 3h
I think it depends on the context. It's the norm for employees in companies to have managers, though as @Steven Byrnes said, this is partially for motivational purposes since the incentives of employees are often not fully aligned with those of the company. So this example is arguably more of an alignment than a capability problem.

I can think of some other examples of humans acting in highly autonomous ways:
* To the best of my knowledge, most academics and PhD students are expected to publish novel research in a highly autonomous way.
* Novelists can work with a lot of autonomy when writing a book (though they're a minority).
* There are also a lot of personal non-work goals, like saving for retirement or raising kids, which require high autonomy over a long period of time.
* Small groups of people like a startup can work autonomously for years without going off the rails, as a group of LLMs probably would after a while (e.g. the Claude bliss attractor).
1 Stephen McAleese 3h
Excellent post, thank you for taking the time to articulate your ideas in a high-quality and detailed way. I think this is a fantastic addition to LessWrong and the Alignment Forum. It offers a novel perspective on AI risk and does so in a curious and truth-seeking manner that's aimed at genuinely understanding different viewpoints.

Here are a few thoughts on the content of the first post:

I like how it offers a radical perspective on AGI in terms of human intelligence and describes the definition in an intuitive way. This is necessary as AGI is increasingly being redefined as something like "whatever LLM comes out next year". I definitely found the post illuminating, and it resulted in a perspective shift because it described an important but neglected vision of how AGI might develop. It feels like the discourse around LLMs is sucking the oxygen out of the room, making it difficult to seriously consider alternative scenarios.

I think the basic idea in the post is that LLMs are built by applying an increasing amount of compute to transformers trained via self-supervised or imitation learning, but LLMs will be replaced by a future brain-like paradigm that will need much less compute while being much more effective. This is a surprising prediction because it seems to run counter to Rich Sutton's bitter lesson, which observes that, historically, general methods that leverage computation (like search and learning) have ultimately proven more effective than those that rely on human-designed cleverness or domain knowledge. The post seems to predict a reversal of this long-standing trend (or I'm just misunderstanding the lesson), where a more complex, insight-driven architecture will win out over simply scaling the current simple ones. On the other hand, there is an ongoing trend of algorithmic progress and increasing computational efficiency which could smoothly lead to the future described in this post (though the post seems to describe a more discontinuous break between
TurnTrout's shortform feed
TurnTrout
Ω 10 6y
Knight Lee 2h 10

This is a little off topic, but do you have any examples of counter-reactions overall drawing things into the red?

With other causes like fighting climate change and environmentalism, it's hard to see any activism being a net negative. Extremely sensationalist (and unscientific) promotions of the cause (e.g. The Day After Tomorrow movie) do not appear to harm it. It only seems to move the Overton window in favour of environmentalism.

It seems that most of the counter-reaction doesn't depend on your method of messaging; it results from the success of your messagi... (read more)

7 Elizabeth 3h
On the other hand, we should expect that the first people to speak out against someone will be the most easily activated (in a neurological sense), because of past trauma, or additional issues with the focal person, or having a shitty year. Speaking out is partially a function of pain level, and pain(legitimate grievance + illegitimate grievance) > pain(legitimate grievance). It doesn't mean there isn't a legitimate grievance large enough to merit concern.
AI-202X: a game between humans and AGIs aligned to different futures?
1
StanislavKrym
2h

> We have a lot of uncertainty over what goals might arise in early AGIs. There is no consensus in the literature about this—see our AI Goals Supplement for a more thorough discussion and taxonomy of the possibilities.

The AI-2027 forecast on alignment of Agents-3 and 4

> Oversight Committee is also encountering deeper philosophical questions, which they explore with the help of Safer-3. Can the Spec be rewritten to equally balance everyone’s interests? Who is “everyone”? All humans, or just Americans? Or a weighted compromise between different views, where each member of the Oversight Committee gets equal weight? Should there be safeguards against the Oversight Committee itself becoming too power-hungry? And what does it mean to balance interests, anyway?

The slowdown branch of the AI-2027 forecast

We don’t endorse many actions in

...
(Continue Reading – 4502 more words)
Nina Panickssery's Shortform
Nina Panickssery
Ω 4 6mo
2 the gears to ascension 15h
i want drastically upgraded biology, potentially with huge parts of the chemical stack swapped out in ways I can only abstractly characterize now without knowing what the search over viable designs will output. but in place, without switching to another substrate. it's not transhumanism, to my mind, unless it's to an already living person. gene editing isn't transhumanism, it's some other thing; but shoes are transhumanism for the same reason replacing all my cell walls with engineered super-bio nanotech that works near absolute zero is transhumanism. only the faintest of clues what space an ASI would even be looking in to figure out how to do that, but it's the goal in my mind for ultra-low-thermal-cost life. uploads are a silly idea, anyway, computers are just not better at biology than biology. anything you'd do with a computer, once you're advanced enough to know how, you'd rather do by improving biology
Nina Panickssery 2h 20

computers are just not better at biology than biology. anything you'd do with a computer, once you're advanced enough to know how, you'd rather do by improving biology

I share a similar intuition but I haven't thought about this enough and would be interested in pushback!

it's not transhumanism, to my mind, unless it's to an already living person. gene editing isn't transhumanism

You can do gene editing on adults (example). Also in some sense an embryo is a living person.

The best simple argument for Pausing AI?
71
Gary Marcus
1d

Not saying we should pause AI, but consider the following argument:

  1. Alignment without the capacity to follow rules is hopeless. You can’t possibly follow laws like Asimov’s Laws (or better alternatives to them) if you can’t reliably learn to abide by simple constraints like the rules of chess.
  2. LLMs can’t reliably follow rules. As discussed in Marcus on AI yesterday, per data from Mathieu Acher, even reasoning models like o3 in fact empirically struggle with the rules of chess. And they do this even though they can explicitly explain those rules (see same article). The Apple “thinking” paper, which I have discussed extensively in 3 recent articles in my Substack, gives another example, where an LLM can’t play Tower of Hanoi with 9 discs. (This is not a token-related
...
(See More – 76 more words)
6 Daniel Kokotajlo 3h
Would you agree that AI R&D and datacenter security are safety-critical domains?  (Not saying such deployment has started yet, or at least not to a sufficient level to be concerned. But e.g. I would say that if you are going to have loads of very smart AI agents doing lots of autonomous coding and monitoring of your datacenters, analogous to as if they were employees, then they pose an 'insider threat' risk, and could potentially e.g. sabotage their successor systems or the alignment or security work happening in the company. Misalignments in these sorts of AIs could, in various ways, end up causing misalignments in successor AIs. During an intelligence explosion / period of AI R&D automation, this could result in misaligned ASI. True, such ASI would not be deployed outside the datacenter yet, but I think the point to intervene is before then, rather than after.)
boazbarak 2h 50

I think "AI R&D" or "datacenter security" are a little too broad.

I can imagine cases where we could deploy even existing models as an extra layer for datacenter security (e.g. anomaly detection). As long as this is for adding security (not replacing humans), and we are not relying on 100% success of this model, then this can be a positive application, and certainly not one that should be "paused."

With AI R&D, again, the question is how you deploy it. If you are using a model in containers supervised by human employees, then that's fine. If you are let... (read more)

1 Matrice Jacobine 8h
FTR: You can choose your own commenting guidelines when writing or editing a post in the section "Moderation Guidelines".
5 habryka 8h
Moderation privileges require passing various karma thresholds (for frontpage posts, it's 2000 karma, I think, for personal posts it's 50).
AI Safety Thursdays: Are LLMs aware of their learned behaviors?
LessWrong Community Weekend 2025
Cole Wyeth 1d 6037
The best simple argument for Pausing AI?
Welcome to LessWrong! I'm glad you've decided to join the conversation here.

A problem with this argument is that it doesn't prove we should pause AI, only that we should avoid deploying AI in high-impact (e.g. military) applications. Insofar as LLMs can't follow rules, the argument seems to indicate that we should continue to develop the technology until it can.

Personally, I'm concerned about the type of AI system which can follow rules, but is not intrinsically motivated to follow our moral rules. Whether LLMs will reach that threshold is not clear to me (see https://www.lesswrong.com/posts/vvgND6aLjuDR6QzDF/my-model-of-what-is-going-on-with-llms) but this argument seems to cut against my actual concerns.
habryka 1d* 4732
Don't Eat Honey
My guess is this is obvious, but IMO it seems extremely unlikely to me that bee-experience is remotely as important to care about as cow experience. Enough as to make statements like this just sound approximately insane:

> 97% of years of animal life brought about by industrial farming have been through the honey industry (though this doesn't take into account other insect farming).

Like, no, this isn't how this works. This obviously isn't how this works. You can't add up experience hours like this. At the very least use some kind of neuron basis.

> The median estimate, from the most detailed report ever done on the intensity of pleasure and pain in animals, was that bees suffer 7% as intensely as humans. The mean estimate was around 15% as intensely as people. Bees were guessed to be more intensely conscious than salmon!

If anyone remotely thinks a bee suffering is 15% (!!!!!!!!) as important as a human suffering, you do not sound like someone who has thought about this reasonably at all. It is so many orders of magnitude away from what sounds reasonable to me that I find myself wanting to look somewhere other than the arguments in things like the Rethink Priorities report (which I have read, and argued with people about for many hours, and which still sounds insane to me, and IMO does not hold up), and instead look towards things like there being some kind of social signaling madness where someone is trying to signal commitment to some group standard of dedication, which involves some runaway set of extreme beliefs.

Edit: And to avoid a slipping of local norms here: I am only leaving this comment here now after I have seriously entertained the hypothesis that I might be wrong, that maybe there do exist good arguments for moral weights that seem crazy to me from where I was originally, but no, after looking into the arguments for quite a while, they still seem crazy to me, and so now I feel comfortable moving on and trying to think about what psychological or social process produces posts like this. And still, I am hesitant about it, because many readers have probably not gone through the same journey, and I don't want a culture of dismissing things just because they are big and would imply drastic actions.
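For a rough sense of what a neuron-count basis would imply (my back-of-envelope figures, not habryka's: a honeybee brain has on the order of 10^6 neurons, a human brain roughly 8.6 × 10^10):

```latex
% Back-of-envelope ratio using rough published neuron counts; illustrative only.
\frac{N_{\text{bee}}}{N_{\text{human}}} \approx \frac{10^{6}}{8.6 \times 10^{10}} \approx 1.2 \times 10^{-5}
% i.e. several orders of magnitude below the 7-15% weights quoted above.
```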
Kaj_Sotala 7h 2211
Authors Have a Responsibility to Communicate Clearly
It used to be that I would sometimes read something and interpret it to mean X (sometimes, even if the author expressed it sloppily). Then I would say "I think the author meant X" and get into arguments with people who thought the author meant something different. These arguments would be very frustrating, since no matter how certain I was of my interpretation, short of asking the author there was no way to determine who was right.

At some point I realized that there was no reason to make claims about the author's intent. Instead of saying "I think the author meant X", I could just say "this reads to me as saying X". Now I'm only reporting on how I'm personally interpreting their words, regardless of what they might have meant. That both avoids pointless arguments about what the author really meant, and is more epistemically sensible, since in most cases I don't know that my reading of the words is what the author really intended.

Of course, sometimes I might have reason to believe that I do know the author's intent. For example, if I've spent quite some time discussing X with the author directly, and have a good understanding of how they think about the topic. In those cases I might still make claims of their intent. But generally I've stopped making such claims, which has saved me from plenty of pointless arguments.
84
Proposal for making credible commitments to AIs.
Cleo Nardo
1d
33
150
X explains Z% of the variance in Y
Leon Lang
4d
23
410 A case for courage, when speaking of AI danger
So8res
5d
44
84 Authors Have a Responsibility to Communicate Clearly
TurnTrout
10h
15
342 A deep critique of AI 2027’s bad timeline models
titotal
12d
39
469 What We Learned from Briefing 70+ Lawmakers on the Threat from AI
leticiagarcia
1mo
15
340 the void
Ω
nostalgebraist
21d
Ω
98
534 Orienting Toward Wizard Power
johnswentworth
1mo
142
660 AI 2027: What Superintelligence Looks Like
Ω
Daniel Kokotajlo, Thomas Larsen, elifland, Scott Alexander, Jonas V, romeo
3mo
Ω
222
206 Foom & Doom 1: “Brain in a box in a basement”
Ω
Steven Byrnes
8d
Ω
78
87 What We Learned Trying to Diff Base and Chat Models (And Why It Matters)
Ω
Clément Dumas, Julian Minder, Neel Nanda
1d
Ω
0
286 Beware General Claims about “Generalizable Reasoning Capabilities” (of Modern AI Systems)
Ω
LawrenceC
20d
Ω
19
71 The best simple argument for Pausing AI?
Gary Marcus
1d
10
159 My pitch for the AI Village
Daniel Kokotajlo
7d
29
418 Accountability Sinks
Martin Sustrik
2mo
57
Sam Marks 6h 191
0
The "uncensored" Perplexity-R1-1776 becomes censored again after quantizing

Perplexity-R1-1776 is an "uncensored" fine-tune of R1, in the sense that Perplexity trained it not to refuse discussion of topics that are politically sensitive in China. However, Rager et al. (2025)[1] documents (see section 4.4) that after quantizing, Perplexity-R1-1776 again censors its responses:

I found this pretty surprising. I think a reasonable guess for what's going on here is that Perplexity-R1-1776 was fine-tuned in bf16, but the mechanism that it learned for non-refusal was brittle enough that numerical error from quantization broke it.

One takeaway from this is that if you're doing empirical ML research, you should consider matching quantization settings between fine-tuning and evaluation. E.g. quantization differences might explain weird results where a model's behavior when evaluated differs from what you'd expect based on how it was fine-tuned.

1. ^ I'm not sure if Rager et al. (2025) was the first source to publicly document this, but I couldn't immediately find an earlier one.
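As a hedged illustration of "matching quantization settings between fine-tuning and evaluation" with Hugging Face transformers and bitsandbytes (the model ID and exact settings below are placeholders, not taken from Rager et al. or from this quick take): if the fine-tune was done with a particular quantization config, load the model for evaluation with the same config rather than, say, full bf16.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

MODEL_ID = "your-org/your-finetuned-model"  # placeholder, not a real checkpoint

# Use the same quantization config at fine-tuning and evaluation time,
# so numerical differences don't silently change model behavior.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Evaluation-time load: mirror whatever was used during fine-tuning.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)
```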
Mikhail Samin 11h 26-4
5
i made a thing! it is a chatbot with 200k tokens of context about AI safety. it is surprisingly good - better than you expect current LLMs to be - at answering questions and counterarguments about AI safety. A third of its dialogues contain genuinely great and valid arguments.

You can try the chatbot at https://whycare.aisgf.us (ignore the interface; it hasn't been optimized yet). Please ask it some hard questions! Especially if you're not convinced of AI x-risk yourself, or can repeat the kinds of questions others ask you. Send feedback to ms@contact.ms.

A couple of examples of conversations with users:
leogao 1d 630
7
random brainstorming ideas for things the ideal sane discourse encouraging social media platform would have:

* have an LM look at the comment you're writing and real time give feedback on things like "are you sure you want to say that? people will interpret that as an attack and become more defensive, so your point will not be heard". addendum: if it notices you're really fuming and flame warring, literally gray out the text box for 2 minutes with a message like "take a deep breath. go for a walk. yelling never changes minds"
* have some threaded chat component bolted on (I have takes on best threading system). big problem is posts are fundamentally too high effort to be a way to think; people want to talk over chat (see success of discord). dialogues were ok but still too high effort and nobody wants to read the transcript. one stupid idea is have an LM look at the transcript and gently nudge people to write things up if the convo is interesting and to have UI affordances to make it low friction (eg a single button that instantly creates a new post and automatically invites everyone from the convo to edit, and auto populates the headers)
* inspired by the court system, the most autistically rule following part of the US government: have explicit trusted judges who can be summoned to adjudicate claims or meta level "is this valid arguing" claims. top level judges are selected for fixed terms by a weighted sortition scheme that uses some game theoretic / schelling point stuff to discourage partisanship (see the sketch below)
* recommendation system where you can say what kind of stuff you want to be recommended in some text box in the settings. also when people click "good/bad rec" buttons on the home page, try to notice patterns and occasionally ask the user whether a specific noticed pattern is correct and ask whether they want it appended to their rec preferences
* opt in anti scrolling pop up that asks you every few days what the highest value interaction you had recently on the
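The quick take doesn't specify a mechanism for the weighted sortition; purely as an illustrative sketch, a weighted draw could look something like this (the helper name, candidates, and weights are all hypothetical):

```python
import numpy as np

def weighted_sortition(candidates, weights, n_judges, seed=None):
    """Draw a judge panel by weighted sortition: each candidate's chance of
    selection is proportional to their weight, sampled without replacement."""
    rng = np.random.default_rng(seed)
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()  # normalize weights into a probability distribution
    return list(rng.choice(candidates, size=n_judges, replace=False, p=p))

# Example: weights might come from karma, peer endorsements, etc. (assumed inputs)
panel = weighted_sortition(["alice", "bob", "carol", "dana"],
                           weights=[10, 3, 5, 2], n_judges=2, seed=42)
print(panel)
```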
johnswentworth 2d Ω411201
26
I was a relatively late adopter of the smartphone. I was still using a flip phone until around 2015 or 2016 ish. From 2013 to early 2015, I worked as a data scientist at a startup whose product was a mobile social media app; my determination to avoid smartphones became somewhat of a joke there.

Even back then, developers talked about UI design for smartphones in terms of attention. Like, the core "advantages" of the smartphone were the "ability to present timely information" (i.e. interrupt/distract you) and always being on hand. Also it was small, so anything too complicated to fit in like three words and one icon was not going to fly.

... and, like, man, that sure did not make me want to buy a smartphone. Even today, I view my phone as a demon which will try to suck away my attention if I let my guard down. I have zero social media apps on there, and no app ever gets push notif permissions when not open except vanilla phone calls and SMS.

People would sometimes say something like "John, you should really get a smartphone, you'll fall behind without one" and my gut response was roughly "No, I'm staying in place, and the rest of you are moving backwards". And in hindsight, boy howdy do I endorse that attitude! Past John's gut was right on the money with that one.

I notice that I have an extremely similar gut feeling about LLMs today. Like, when I look at the people who are relatively early adopters, making relatively heavy use of LLMs... I do not feel like I'll fall behind if I don't leverage them more. I feel like the people using them a lot are mostly moving backwards, and I'm staying in place.
Raemon 5h 96
0
TAP for fighting LLM-induced brain atrophy: "send LLM query" ---> "open up a thinking doc and think on purpose."

What a thinking doc looks like varies by person. Also, if you are sufficiently good at thinking, just "think on purpose" is maybe fine, but I recommend having a clear sense of what it means to think on purpose and whether you are actually doing it. I think having a doc is useful because it's easier to establish a context switch that is supportive of thinking.

For me, "think on purpose" means:
* ask myself what my goals are right now (try to notice at least 3)
* ask myself what would be the best thing to do next (try for at least 3 ideas)
* flowing downhill from there is fine