LLMs also just have their own quirks, and I think Qwen might just really like hell and cats? For example, Claude Sonnet seems to really like bioluminescence as a topic, reliably enough across different instances that Janus gets some impressive predictive accuracy.
Anybody interested in this topic should absolutely read the Niskanen report on healthcare abundance, which goes into excruciating detail on how over-regulation and entrenched interests have kneecapped the supply of healthcare (doctors, hospitals, clinics, hospital beds, etc) to the detriment of society overall.
But I think that you're losing sight of my point that these arguments have all served to justify the mass murder of people with much lower mental abilities than the average human.
If this is the "point," then your comment reduces to an invalid appeal-to-consequences argument. The fact that some people use an argument for morally evil purposes tells us nothing about the logical validity of that argument. After all, Evil can make use of truth (sometimes selectively) just as easily as Good can; we don't live in a fairy tale where trade-offs between Good and Truth a...
I think that these are all pretty relevant ways to think about being an EA, but are mostly of a different fundamental type than the thing I'm pointing at. Let me get a bit more into the aforementioned math to show why this is approximately a binary categorization along the axis I was pointing at in this post.
Say that there are three possible world states:
As a culture, on average, o...
Currently you can't discharge student loans in bankruptcy. I think it would be good if you could. But then people might declare bankruptcy immediately after graduating, to the point that people wouldn't be able to get student loans. Allowing lenders to repossess degrees in bankruptcy would be one way to mostly resolve this.
One is not philosophically obliged to regard the nature of reality as ineffable or inescapably uncertain.
Quarks are a good place to explore this point. The human race once had no concept of quarks. Now it does. You say that inevitably, one day, we'll have some other concept. Maybe we will. But why is that inevitable? Why can't quarks just turn out to be part of how reality actually is?
You cite Nagarjuna and talk about emptiness, so that gives me some idea of where you are coming from. This is a philosophy which emphasizes the role of concepts i...
You could say there are two conflicting scenarios here: superintelligent AI taking over the world, and open-source AI taking over daily life. In the works that you mention, superintelligence comes so quickly that AI mostly remains a service offered by a few big companies, and open-source AI is just somewhere in the background. In an extreme opposite scenario, superintelligence might take so long to arrive that the human race gets completely replaced by human-level AI before superintelligent AI ever exists.
It would be healthy to have all kinds of com...
Sorry that you also lost your mom. 🫂
A sentiment that didn't quite make it into the piece is that my anger and grief have been transformed into steadfastness by my love for her. The idea for this post came from a sense of determination that her death would mean something to others. That steadfastness has also given new fuel to my other projects. I'm determined to get my book finished in time to influence the course of AI. I'm also determined to live the best life I can, and one worthy of my mom's sense of fun, if we really do only have dozens of months left.
Not sure if this is relevant, but when I make subtitles for videos, I try to remove some unnecessary words. For example, if someone says "two plus two is... uhm, equals... uhm, four", I write "two plus two equals four". This is better in two ways: first, no one really cares about the "uhm"; second, shorter subtitles are easier to read.
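A minimal sketch of that cleanup pass in Python; the `FILLERS` list and `clean_subtitle` helper are hypothetical, and a mechanical filter like this only catches fillers — false starts (the abandoned "is" in the example) still need a human pass:

```python
import re

# Hypothetical filler list; extend for the speaker/language at hand.
FILLERS = {"uhm", "um", "uh", "er"}

def clean_subtitle(line: str) -> str:
    """Strip filler words and hesitation ellipses from one subtitle line."""
    line = line.replace("...", " ")  # drop hesitation markers
    kept = [w for w in line.split() if w.strip(",.!?").lower() not in FILLERS]
    return re.sub(r"\s+", " ", " ".join(kept)).strip()

print(clean_subtitle("two plus two is... uhm, equals... uhm, four"))
# -> "two plus two is equals four"  (the false start "is" survives;
#    removing it is the judgment call a human editor still makes)
```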
The idea that Chinchilla scaling might be slowing comes from the fact that we've seen a bunch of delays and disappointments in the next generation of frontier models.
GPT-4.5 was expensive and it got yanked. We're not hearing rumors about how amazing GPT-5 is. Grok 3 scaled up and saw some improvement, but nothing that gave it an overwhelming advantage. Gemini 2.5 is solid but not transformative.
Nearly all the gains we've seen recently come from reasoning, which is comparatively easy to train into models. For example, DeepScaleR is a 1.5B parameter local mo...
Probably worth noting that there are lots of frames to pick from, of which you've discussed two: question, ideology, project, obligation, passion, central purpose, etc.
Somehow this has escaped comment, so I'll have a go. I write from the perspective of whether it's suitable as the value system of a superintelligence. If PRISM became the ethical operating system of a posthuman civilization born on Earth, for as long as that civilization managed to survive in the cosmos - would that be a satisfactory outcome?
My immediate thoughts are: It has a robustness, due to its multi-perspective design, that gives it some plausibility. At the same time, it's not clear to me where the seven basis worldviews come from. Why those s...
Kudos for going so in-depth on this.
And finally: why is October the First "too late"?
None of the speculations here seem convincing to me. I'd expect there to be a simple "key" that "unlocks" the story and makes its overall meaning clear (see Suzanne Delage), and none of this feels like it fits.
One element that stuck with me, which you don't mention, is:
...[T]he Institute attempts a radical re-centering of the human condition upon this pataphysical temporal locus, an eternal September. Everything is coming to be, or has become, through the 30th of Sep
That has been the default strategy for many years and it failed dramatically.
All the "convinced influential people in tech", started making their own AI start-ups, while comming up with galaxy-brained rationalizations why everything will be okay with their idea in particular. We tried to be nice to them in order not to lose our influence with them. Turned out we didn't have any. While we carefully and respectfully showed the problems with their reasoning, they likewise respectfully nodded their heads and continued to burn the AI timelines. Who could'...
(A small rant, sorry) In general, it seems you're massively overanchored on current AI technology, to an extent that it's stopping you from clearly reasoning about future technology. One example is the jailbreaking section:
There has been no noticeable trend toward real jailbreak resistance as LLMs have progressed, so we should probably anticipate that LLM-based AGI will be at least somewhat vulnerable to jailbreaks.
You're talking about AGI here. An agent capable of autonomously doing research, playing games with clever adversaries, detecting and patching i...
Yes, my disagreement was mostly with the first paragraph, which read to me like "who are you going to believe, the expert or your own lying eyes". I'm not an expert, but I do have a sense of aesthetics, that sense of aesthetics says the cover looks bad, and many others agree. I don't care if the cover was designed by a professional; to shift my opinion as a layperson, I would need evidence that the cover is well-received by many more people than dislike it, plus A/B tests of alternative covers that show it can't be easily improved upon.
That said, I also di...
I don't think there will be an age of abundance :-(
As for unemployment, it feels a bit weird that 1) everyone I know outside FAANG, including me, feels that finding a job has become much harder, while 2) the statistics say today's unemployment rate is kinda low and unremarkable.
"Write like you talk" depends on which language you are talking about.
Take Arabic. Written Arabic and spoken Arabic have diverged enormously compared to written English and spoken English. Modern Standard Arabic (MSA) is the formal written language for books, newspapers, speeches, etc. But no sane person speaks it. There are a lot of spoken dialects (like Egyptian, Levantine, Gulf Arabic, etc.). A speaker of one dialect may not understand other dialects or MSA, because the vocabulary and grammar are different, which isn't usually the case in English.
Written and spoken English are similar to each other compared to most other languages.
Yeah, having a group of smart people who can work together is a crucial ingredient, and attempts to replicate the success with different teams fail miserably. And the next step is giving those people autonomy. What is the point of hiring people who are smart, good at their work, often obsessed with their work... and then having them micromanaged by someone who probably couldn't write a short shell script if their life depended on it?
The Scrum Guide is basically about how to get rid of managers, without everything falling apart. (All the bureauc...
Link is dead; here's an archive. (It's the podcast Conversations from the Pale Blue Dot, episode 75).
Scott is often considered a digressive or even “astoundingly verbose” writer.
This made me realise that as a reader I care not so much about "information & ideas per word" (roughly speaking) as about "information & ideas per unit of effort spent reading". I'm reminded of Jason Crawford on why he finds Scott's writing good:
Most writing on topics as abstract and technical as his struggles just not to be dry; it takes effort to focus, and I need energy to read them. Scott’s writing flows so well that it somehow generates its own energy, like some sort of perpetual motion machine.
My fa...
Would anyone be interested in having a conversation with me about morality? Either publicly[1] or privately.
I have some thoughts about morality but I don't feel like they're too refined. I'm interested in being challenged and working through these thoughts with someone who's relatively knowledgeable. I could instead spend a bunch of time eg. digging through the Stanford Encyclopedia of Philosophy to refine my thoughts, but a) I'm not motivated enough to do that and b) I think it'd be easier and more fun to have a conversation with someone about it.
The statement “gives” you information but that doesn’t “count” as you “getting” information.
It's literally true that I got information, but I didn't get information from it in the ordinary sense of "I parsed his words, and his words said something about X, so now I know the thing about X that is described by his words".
There's a difference between the information content of the statement, and the information that may be concluded from the statement in context. For instance, if I ask someone a question and he responds by snoring I may conclude that he ...
At least in democracies, convincing the people of something is an effective way to get politicians to pay attention to it - their job depends on getting these people to vote for them.
Notably in the UK, David Cameron gave the people a vote on whether to leave the EU because this was an idea that was gaining popularity. He did this despite not himself believing in the idea.
Naturally, plenty of legislation also gets passed without most people noticing, and in this respect we are better off convincing lawmakers. But I think that if we are able to c...
“Clarity didn’t work, trying mysterianism” is the title of a short story by Scott Alexander.
Was it the title? I always thought Scott used the phrase as commentary on why he was posting the story, same as gwern is doing here. As in, he tried to clearly say "an omnipresent personal AI agent that observes your life and directly tells you the best way to act in every situation you encounter would be a bad thing because building up your own mind into being able to overcome challenging situations is necessary for a meaningful life", people didn't buy it, and t...
Sometimes the point is specifically to not update on the additional information, because you don't trust yourself to update on it correctly.
Classic example: "Projects like this usually take 6 months, but looking at the plan I don't see why it couldn't be done in 2... wait, no, I should stick to the reference class forecast."
I am confused and feel like I must be misunderstanding your point. It feels like you're attempting a "gotcha" argument, but I don't understand your point or who you're trying to criticize. It seems like bizarre rhetorical practice. It is not a valid argument to say that "people can hold position A for bad reason X, therefore all people who hold position A also hold it for bad reason X even if they claim it is for good reason Y". But that seems to be your argument?
I think you're overinterpreting my comment and attributing to me the least charitable plausibl...
I fear your concerns are very real. I've spent a lot of time running experiments on the mid-sized Qwen3 models (32B, 30B A3B), and they are strongly competitive with frontier models up through gpt-4o-1120. The latter writes better and has more personality, but the former are more likely to pass your high school exams.
What happened here? Well, two things. First, the Alibaba Group is competent and knows what it's doing. But more importantly, it turned out that "reasoning" was surprisingly easy, and everyone cloned it within a few months, sometimes on budgets...
My thoughts about the story, perhaps interesting to any future reader trying to decipher the "mysterianism" here, a la these two analyses of sci-fi short stories, or my own attempts at exegeses of videogames like Braid or The Witness. Consider the following also as a token of thanks for all the enjoyment I've received from reading Gwern's various analyses over the years.
(To clarify, I strong disagree voted, I haven't downvoted at all - I still strongly disagree)
The problem with non-open-weight models is that they need to be exfiltrated before wreaking havoc, while open-weight models cannot avoid being evaluated. Suppose that the USG decides that all open-weight models are to be tested by OpenBrain for being aligned or misaligned. Then even a misaligned Agent-x has no reason to blow its cover by failing to report an open-weight rival.
The statements "gives" you information but that doesn't "count" as you "getting" information. Furthermore the "low-information" statement mysteriously gives you information, yet not quite enough information to count as not a "low-information" statement. Okay, so this isn't about communicating information, it's about communicating information with a twist -- it also has to count. If your interlocutor communicates successfully but it doesn't count, you're allowed to make a definition challenge, where they have to provide a set of criteria you're allowed to a...
You strong disagree downvoted my comment, but it's still not clear to me that you actually disagree with my core claim. I'm not making a claim about priors, or whether it's reasonable to think that p(doom) might be non-negligible a priori.
My point is instead about whether the specific technical details of deep learning today are ultimately what's driving some people's high probability estimates of AI doom. If the intuition behind these high estimates could've been provided in the 19th century (without modern ML insights), then modern technical arguments do...
Domain: Other Lists like This
Link: Map of Reddit (warning: pressing enter does not work in the search box, you have to click on a suggested subreddit in the dropdown)
Author(s): Andriy Kashcha
Type: Interactive Chart
Why: Groups Reddit's subreddits into categories & shows subreddits related to a given one.
I imagine most disagreement comes from the first paragraph.
The problem with assuming that because the publisher is famous their design is necessarily good is that even huge companies make baffling design decisions all the time, and in this case one can directly see the design and know that it's not great – the weak outside-view evidence that prestigious companies usually do good work doesn't move this very much.
Thanks, that's all relevant and useful!
Simplest first: I definitely envision a hierarchy of reporting and reviewing questionable requests. That seems like an obvious and cheap route to partly address the jailbreaking/misuse issues.
I've also envisioned smarter LLM agents "thinking through" the possible harms of their actions, and you're right that does need at least a pretty good grasp on human values. Their grasp on human values is pretty good and likely to get better, as you say. I haven't thought of this as value alignment, though, because I've assumed th...
Oh I see. After hearing "I don't like chemicals in my food" I understood something like: this person prefers organics. Are you not able to surmise this? If you're not, then you're definitely gaining less information when talking to people than I am. I can generally communicate with people even if they use "chemical" imprecisely.
A good summary, but it's worth noting that while the death penalty for failing to fight was on the books, Byng's execution was the only time it was ever actually carried out. It's a bit similar to how the US military legally has the authority to execute deserters, but in the past century has only ever exercised this once out of tens of thousands of sentences (Eddie Slovik during WWII).
From reading the autobiography of Lord Cochrane, an insanely aggressive and insanely successful captain during the Napoleonic Wars, my impression is that the Royal Navy was ve...
When furnishing Lighthaven I was also very surprised how little capsules from capsule hotels are optimized for sound isolation. My sense is that it's partially the result of building codes: the more you make things out of real walls, the more you risk the capsule being classified as a room (instead of a piece of furniture), which would make it illegal. Many places like San Francisco also require capsules in capsule hotels to not have any doors, but instead to just use curtains, which also completely trashes sound isolation.
This is very concerning, and consistent with other patterns I've noted across studies of a variety of sorts of misaligned model behavior: reasoning-trained models appear to be not just more capable of this, but also more prone to being willing to do it. It suggests that successfully aligning reasoning-trained models is a harder problem. I suspect we'll need to find a solution that can be intermixed or combined during reasoning training.
I used to do graphic design professionally, and I definitely agree the cover needs some work.
I put together a few quick concepts, just to explore some possible alternate directions they could take it:
https://i.imgur.com/zhnVELh.png
https://i.imgur.com/OqouN9V.png
https://i.imgur.com/Shyezh1.png
These aren't really finished quality either, but the authors should feel free to borrow and expand on any ideas they like if they decide to do a redesign.
The deceptive element involved here feels like this will be in the category of alignment techniques that get increasingly hard to use successfully as model capabilities go up. Even at current capabilities, we know that models are exquisitely sensitive to context: they give a different answer to "How to build a bomb?" in a D&D context or a Minecraft context than a real-world context, and we know there are activation directions that trigger those common contexts. So when generating the synthetic fine-tuning data for this, I think you'd need to pay carefu...
Very exciting; thanks for writing!
I know this is minor, but the image on the bottom of the website looks distractingly wrong to me -- the lighting doesn't match where real population centers are. It would be a lot better with something either clearly adapted from the real world or something clearly created, but this is pretty uncanny valley.
It was learning to propose coding tasks that were hard, but not impossible, and to solve these coding tasks, recursively. Most coding tasks are "ethically neutral" — they don't contain any evidence that anyone is trying to do anything good, or bad. We know there are exceptions: the phenomenon of emergent misalignment makes it clear that models have strong ethical intuitions about insecure code being inherently bad, to the point where if you fine-tune them to write insecure code they 'emergently' become much more likely to do all sorts of other ethically un...
Fascinating! I was excited when Goodfire's API came out (to the point of applying to work there), but have since been unable to take the time to explore this in more detail, so it's nice to read about someone doing so.
A few quick comments:
I enjoyed the post, but I don't think people come to this website for enjoyment; they come to improve themselves. If they wanted to have fun, I'd imagine they'd go somewhere else (as stated in the article: Netflix, doomscrolling on TikTok, alcohol, weed, video games, porn).
The only value this post brings is that it's enjoyable to read; it doesn't actually do anything for you besides that. Sure, you can relate to the post, and it makes you feel heard, and feeling like somebody understands you is a good feeling. But it doesn't provide any antidotes.
What will ha...
What you are describing as the "aristocratic system" is, I think, better called the feudal arrangements, which continued later into the industrial period most famously in the American South, where large estates were becoming increasingly economically viable with the combination of slave labor and mechanized processing of cotton. Some old-world cultural expressions of medieval chivalry not only persisted there but were becoming more popular, with a craze for dueling, a deadly menace mentioned repeatedly in the press. In spite of the aristocratic cultural vigor an...
Something I've done in the past is to send text that I intended to have translated through machine translation, and then back, with low latency, to gain confidence in the semantic stability of the process.
Rewrite English, click, click.
Rewrite English, click, click.
Rewrite English... click, click... oh! Now it round-trips with high fidelity. Excellent. Ship that!
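For what it's worth, a minimal sketch of that loop in Python — `translate(text, src, dst)` is a hypothetical stand-in for whatever MT service you use, and surface similarity via `difflib` is only a crude proxy for true semantic stability:

```python
from difflib import SequenceMatcher

def translate(text: str, src: str, dst: str) -> str:
    """Hypothetical stand-in: wire this to your MT service of choice."""
    raise NotImplementedError

def round_trip_score(text: str, src: str = "en", dst: str = "fr") -> float:
    """Translate there and back, then score how much of the text survives."""
    back = translate(translate(text, src, dst), dst, src)
    return SequenceMatcher(None, text.lower(), back.lower()).ratio()

# The loop above: rewrite the English, re-score, and ship once the
# ratio stops improving (a score near 1.0 round-trips with high fidelity).
```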
To me, "writing how you talk" also stands in for, like, writing with good auditory flow. I often consider how many syllables a sentence has, and how they roll off the tongue. In some sense this matters less when people are reading silently, but since (most?) readers use their inner voice, lyrical language can still be valuable. This is another pretty strong injunction against long sentences; it's hard to imagine them being spoken, and so it's hard for them to be beautiful/aesthetic. It's also an argument for using lots of commas, to help show the reader when their inner orator should breathe.
Online advertising can be used to promote books. Unlike with many books, you are not trying to make a profit and can pay for advertising beyond the point where the publisher's marginal cost equals marginal revenue. Do you:
This feels very timely for me. My partner has suffered from chronic back pain (out of nowhere) for the last few years, and we've been experimenting with various PRP and glucose injections, which have, sadly, not provided the relief she needed.
I spend a great deal of time studying neuropsychology for uni and have been talking to some of the doctors about some cognitive options, but it has mostly fallen on deaf ears. Your post made me buy that book—so thank you. I think we're going to give it a go; if nothing happens, we are no worse off.
I spent 15 months working for ARC Theory. I recently wrote up why I don't believe in their research. If one reads my posts, I think it should become very clear to the reader that either ARC's research direction is fundamentally unsound, or I'm still misunderstanding some of the very basics after more than a year of trying to grasp it. In either case, I think it's pretty clear that it was not productive for me to work there. Throughout writing my posts, I felt an intense shame imagining re...
One of the more interesting strategic questions: of the current leading foundation model labs, which are run by leaders who (as far as we can tell from their publicly known actions and opinions) are clearly not psychopaths, narcissists, pathological liars, political extremists, or otherwise subject to very concerning psychological tendencies or instabilities? This seems like a very important consideration for anyone considering working at any of these companies, and could turn out to be pivotal rather soon.
(Personally, I'm not aware of any significant conce...
Let me throw in a third viewpoint as well as math and psychology/neuroscience: physics. Or more specifically, calculus and non-linear systems. Let me give you an example: Value Learning. Human values are complex, and even though LLMs are good at understanding human complexity, alignment is hard and we're unlikely to get it perfect on the first shot. But AGI, by definition, isn't dumb, so it will understand that. If it is sufficiently close to aligned, it will want to do what we want, so it will regard not being perfectly aligned as a flaw in itself, and ...
For what it's worth, I consider problem 1 to be somewhat less of a showstopper than you do, because of things like AI control (which while unlikely to scale to arbitrary intelligence levels, is probably useful for the problem of instrumental goals).
However, I do think problems 2 and 3 are a big reason why I'm less of a fan of deploying ASI/AGI widely like @joshc wants to do.
Something close to proliferation concerns (especially around bioweapons) is a big reason why I disagree with @Richard_Ngo on AI safety agreeing to be cooperative with open-source demand...
I see that the numbers indicate people disagree with this post. Since there are several clauses, it's hard to know which specifically (or all of them) are being disagreed with.
The second paragraph (beginning "Contrary to what you wrote...") is a list of factual statements, which as far as I can tell are all correct.
The third paragraph ("Most importantly, the title is plenty big...") is more subjective, but I'm currently not imagining that anyone is disagreeing with that paragraph (that is, that anyone thinks "actually, the title is too small").
The fourth p...
It's actually $0.06 / pill, not $0.60. Doesn't make a big difference to your bottom line though as both costs are cheap.
🕯️
My mom died last December, and part of the grief is in how hard it is to say (to people who loved her, and miss her, like I do, but don't have the same awareness of history) what you've said here about your mom, and timelines, and how much potentially fantastic future our mothers missed out on. Thank you for putting some of that part of "that lonely part of the grief" into words.
My biggest concern about the Instruction Following/Do What I Mean and Check alignment target is that it doesn't help with coordination and conflict problems between human principals. As you note, frontier labs are already having to supplement it with refusal training to try to prevent entirely obvious and basic forms of misuse, and that is proving not robust to jailbreaking (to the level where bad actors have been running businesses that rely on sending jailbroken requests to Claude or ChatGPT and being able to consistently get results). Refusal training i...
right; I'm not making the claim that microplastics definitely have zero effects, or that we should halt research into them.
but I am making the claim that these sorts of risks — microplastics included — receive attention from laypeople far out of proportion to their actual danger; and that a similar model of social exposure explains similar outcomes
let me draw an analogy to the microbes case: now that we have the scientific method, we can evaluate hypotheses like "failing to wash your hands before surgery causes a higher risk of infection", or "regions with st...
I still don't see it, sorry. If I think of deep learning as an approximation of some kind of simplicity prior + updating on empirical evidence, I'm not very surprised that it solves the capacity allocation problem and learns a productive model of the world.[1] The price is that the simplicity prior doesn't necessarily get rid of scheming. The big extra challenge for heuristic explanations is that you need to do the same capacity allocation in a way that scheming reliably gets explained (even though it's not relevant for the model's performance a...
They say above "there may be methods to productively use process-based supervision while retaining the monitoring benefits of unrestricted CoTs, e.g. by only applying process-supervision to part of the CoT." This sounds like maybe they are talking about the shoggoth/face distinction, or something in that direction! Yay!
It seems like the suggestion here is to potentially apply less process-supervision (i.e. to part of the CoT). In this case, though, the more straightforward fix to this issue is to apply more supervision. Specifically, they were not ap...
This is true of all teas. The decaf ones all are terrible. I spent a while trying them in the hopes of cutting down my caffeine consumption, but the taste compromise is severe. And I'd say that the black decaf teas were the best I tried, mostly because they tend to have much more flavor & flavorings, so there was more left over from the water or CO2 decaffeination...
In the Hornblower series of novels, at one point Captain Hornblower surrenders to the enemy during a naval battle. He is captured by the French, but later escapes. When he gets home, he's put on trial for surrendering. They finally acquit him when it is revealed that he had lost something like half (maybe two-thirds?) of his crew—basically massive casualties. But surrendering was considered guilty until proven innocent.
Do we know the tradition predates Christianity separating from Judaism? The particular story is later.
In fact, it seems at least possible (but I don't know how plausible) that the causation is the other way around: the story is supposed to tell the readers that Rabbi Yeshua's miracles don't prove he's right either.
There are some pretty important caveats:
@Jozdien talks more about this below:
Trainium is mostly a joke
I think it can help AWS with price-performance for the narrow goal of giant pretraining runs, where the capex on training systems might be the primary constraint on scaling soon. For reasoning training (if it does scale), building a single training system is less relevant, the usual geographically distributed inference buildout that hyperscalers are doing anyway would be about as suitable. And the 400K chip Rainier system indicates that it works well enough to ramp (serving as a datapoint in addition to on-paper specification).
...~$5 for a cup of coffee — that’s about an order of magnitude cheaper.
Are you buying your coffee from a cafe every day or something? You can buy a pack of nice grounds for like $13, and that lasts more than a month (126 Tbsp/pack / (3 Tbsp/day) = 42 days/pack), totaling 30¢/day. Half the cost of a caffeine pill. And that’s if you don’t buy bulk.
The "lightcone-eating" effect on the website is quite cool. The immediate obvious idea is to have that as a background and write the title inside the black area.
If you wanted to be cute, you could even make the expansion vaguely skull-shaped; perhaps like so?
The prize for the most insightful comment goes to hliyan:
- About 8 years ago I was gifted a copy of Ray Dalio’s Principles. Being a process aficionado who thought the way to prevent bureaucracy was to ground process in principles, I was very excited. But halfway through I gave up. All the experience, the observations, the case studies that had led Dalio to each insight, had been lost in the distillation process. The reader was only getting a Plato’s Cave version.
I think the main reason is less the captains' incentives than the superior discipline and training of the crews. The Royal Navy was extremely "modern" in this regard, with highly trained crews and a kind of industrial level of ship maintenance and supply. This gives you healthy, trained men on your ship, which you can then use to pursue your incentives. But the discipline and order come first.