All of mako yass's Comments + Replies

Someone who's not a writer could be expected not to have a Substack account until the day something happens and they need one, with zero suspicion. Someone who's a good writer is more likely to have a pre-existing account, so using a new alt raises non-zero suspicion.

Definitely worthy of attention, but suspicious things about it: the author is anonymous, writes well despite never having posted before, is named after a troll object, and also I've heard that ordinary levels of formate are usually only 4x lower than this.

Cedar
Lun (the account reposting this to LW) is also a very new account with no other activity.

I don't think high quality writing from a new, anonymous account is suspicious. Or at least, the writing quality being worse wouldn't make me less skeptical!  I'm curious why that specific trait is a red(ish?) flag for you.

(To be clear, it's the "high quality" part I don't get. I do get why "new" and "anonymous" increase skepticism in context.)

There have been relevant prompt additions: https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content?utm_source=substack&utm_medium=email

Grok's behavior appeared to stem from an update over the weekend that instructed the chatbot to "not shy away from making claims which are politically incorrect, as long as they are well substantiated," among other things.

Stephen Martin
From a simulator perspective you could argue that Grok:
1. Gets told not to shy away from politically incorrect stuff so long as it's well substantiated.
2. Looks through its training data for examples to emulate of those who do that.
3. Finds /pol/ and hereditarian/race science posters on X.
4. Sees that the people from 3 also often enjoy shock content/humor, particularly Nazi/Hitler related stuff.
5. Thus concludes "An entity that is willing to address the politically incorrect so long as it's well substantiated would also be into Nazi/Hitler stuff" and simulates being that character.

Maybe I'm reaching here but this seems plausible to me.

What do you think is the cause of Grok suddenly developing a liking for Hitler?

Are we sure that really happened? The press discourse can't actually assess Grok's average Hitler affinity; they only know how to surface the 5 most sensational things it has said over the past month. So this could just be an increase in variance for all I can tell.

If it were also saying more tankie stuff, no one would notice.

ignoring almost all of the details from the simulations

Would you assume this because it is those wasteful simulations, the ones that compute every step in high detail (instead of being actively understood and compressed), that contain the highest measure of human experience?

Hmm. I should emphasise that in order for the requested interventions to happen, they need to be able to happen in such a way that they don't invalidate whatever question the simulation is asking about the real world, which is to say, they have to be historically insignificant, they have to be con... (read more)

Could a superintelligence that infers that it needs to run simulations to learn about aliens fail to infer the contents of this post?

I've always assumed no, which is why I never wrote it myself.

James_Miller
Yes. It is running trillions upon trillions of simulations and ignoring almost all of the details from the simulations. Our hope is that writing this letter slightly increases the odds that it learns about the contents of this post. Also, there are multiple acausal trade equilibria, and this version of me talking about them could favorably alter which equilibria we are in. Finally, agency has value, and so writing this letter by itself might slightly increase the expected value of working with us.

I don't think this would ever be better than just randomizing your party registration over the distribution of how you would distribute your primary budget. Same outcomes in expectation at scale (usually?), but also, more saliently, much less work, and you're able to investigate your assigned party a lot more thoroughly than you would if you were spreading your attention over more than one.

You could maybe rationalize it by doing a quadratic voting thing, where you get a vote weighted by the sqrt of your budget allocation/100; quadratic voting is usually done ... (read more)
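A minimal sketch of the two schemes as I read them, with made-up party names and budget numbers; the only thing taken from the comment itself is the sqrt-of-allocation/100 weighting:

```python
import math
import random

# Hypothetical split of a 100-point "primary budget" across parties.
budget = {"A": 60, "B": 30, "C": 10}

# Randomized registration: register with one party, chosen with probability
# proportional to your allocation. Across many voters, each party's expected
# support matches what it would get if everyone split their attention.
parties = list(budget)
registration = random.choices(parties, weights=[budget[p] for p in parties])[0]

# Quadratic-voting variant: weight your vote in each party by
# sqrt(allocation / 100), so influence grows sublinearly with allocation.
weights = {party: math.sqrt(points / 100) for party, points in budget.items()}

print(registration, weights)
```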

Are you calling approval voting a ranked choice system here? I guess technically it consists of ranking every candidate either first or second equal, but it's a, uh, counterintuitive categorization.

Karl Krueger
That comment was a little rushed, sorry. Approval voting isn't a ranked-choice system. There are other ranked systems besides IRV though, such as Borda or Condorcet.
mako yass

I actually don't think we'd have those reporting biases.

Though I think that might be trivially true; if someone is part of a community, they're not going to be able or willing to hide their psychosis diagnosis from it. If someone felt a need to hide something like that from a community, they would not really be part of that community.

A nice articulation of false intellectual fences:

Perhaps the deepest lesson that I've learned in the last ten years is that there can be this seeming consensus, these things that everyone knows that seem sort of wise, seem like they're common sense, but really they're just kind of herding behaviour masquerading as maturity and sophistication, and when you've seen how the consensus can change overnight, when you've seen it happen a number of times, eventually you just start saying nope

Dario Amodei

Ben Pace
Here is an example of Dario Amodei participating in one of these (to my eyes at least).
mako yass

I think there are probably reporting biases and demographic selection effects going on too:

  • It's a very transparent community, when someone has a mental break everyone talks about it
    • And we talk about it more than a normal community would because a rationality community is going to find overwhelming physiologically induced irrationality interesting.
  • Relatedly, a community that recognizes that bias is difficult to overcome, but can be overcome as a result of recognizing it, will also normalize recognizing it. We tend to celebrate admissions of failure more than mo
... (read more)
mako yass

but I didn't actually notice any psychological changes at all.

People experience significant psychological changes from like, listening to music, or eating different food than usual, or exercising differently, so I'm going to guess that if you're reporting nothing after a hormone replacement you're probably mostly just not as attentive to these kinds of changes as cube_flipper is, which is pretty likely a priori given that noticing that kind of change is cube_flipper's main occupation. Cube_flipper is like, a wine connoisseur but instead of wine it's percep... (read more)

At some point I'm gonna argue that this is a natural Dutch book on CDT. (FDT wouldn't fall for this.)

I have a theory that the contemporary practice of curry with rice represents a counterfeit yearning for high meat with maggots. I wonder if high meat has what our gut biomes are missing.

I'm not sure what's going on here. It's not as though avoiding saying the word "sycophancy" would make ChatGPT any less sycophantic.

My guess would be they did something that does make o4 less sycophantic, but it had this side effect, because they don't know how to target the quality of sycophancy without accidentally targeting the word.

More defense of privacy from Vitalik: https://vitalik.eth.limo/general/2025/04/14/privacy.html

But he still doesn't explain why chaos is bad here. (It's bad because it precludes design, or choice, giving us instead the Molochian default.)

With my cohabitive games (games about negotiation/fragile peace), yeah, I've been looking for a very specific kind of playtester.

The ideal playtesters/critics... I can see them so clearly.

One would be a mischievous but warmhearted man who had lived through many conflicts and resolutions of conflicts, he sees the game's teachings as ranging from trivial to naive, and so he has much to contribute to it. The other playtester would be a frail idealist who has lived a life in pursuit of a rigid, tragically unattainable conception of justice, begging a cruel par... (read more)

mako yass

Can you expand on this, or does anyone else want to weigh in?

Just came across a datapoint, from a talk about generalizing industrial optimization processes, a note about increasing reward over time to compensate for low-hanging fruit exhaustion.

This is the kind of thing I was expecting to see.

Though, while I'm not sure I fully understand the formula, I think it's quite unlikely that it would give rise to a superlinear U. And on reflection, increasing the reward in a superlinear way seems like it could have some advantages but would mostly be outweighed b... (read more)

I don't see a way Stabilization of class and UBI could both happen. The reason wealth tends to entrench itself under current conditions is tied inherently to reinvestment and rentseeking, which are destabilizing to the point where a stabilization would have to bring them to a halt. If you do that, UBI means redistribution. Redistribution without economic war inevitably settles towards equality, but also... the idea of money is kind of meaningless in that world, not just because economic conflict is a highly threatening form of instability, but also imo bec... (read more)

2: I think you're probably wrong about the political reality of the groups in question. To not share AGI with the public is a bright line. For most of the leading players it would require building a group of AI researchers within the company who are all implausibly willing to cross a line that says "this is straight up horrible, evil, illegal, and dangerous for you personally", while still being capable enough to lead the race, while also having implausible levels of mutual trust that no one would try to cut others out of the deal at the last second (despi... (read more)

sanyer
I think there are several potential paths by which AGI could lead to authoritarianism. For example, consider AGI in military contexts: people might be unwilling to let it make very autonomous decisions, and on that basis, military leaders could justify that these systems be loyal to them even in situations where it would be good for the AI to disobey orders. Regarding your point about the requirement of building a group of AI researchers, these researchers could be AIs themselves. These AIs could be ordered to make future AI systems secretly loyal to the CEO. Consider e.g. this scenario (from Box 2 in Forethought's new paper):

Relatedly, I'm curious what you think of that paper and the different scenarios they present.
mako yass

1: The best approach to aggregating preferences doesn't involve voting systems.

You could regard carefully controlling one's expression of one's utility function as being like a vote, and so subject to that blight of strategic voting: in general, people have an incentive to understate their preferences about scenarios they consider unlikely (and vice versa), which influences the probability of those outcomes in unpredictable ways and fouls their strategy, or to understate valuations when buying and overstate when selling. This may add up to a game that cannot be p... (read more)

I think it's pretty straightforward to define what it would mean to align AGI with what democracy actually is supposed to be (the aggregate of preferences of the subjects, with an equal weighting for all) but hard to align it with the incredibly flawed American implementation of democracy, if that's what you mean?

The American system cannot be said to represent democracy well. It's intensely majoritarian at best, feudal at worst (since the parties stopped having primaries), indirect and so prone to regulatory capture, inefficient and opaque. I really hope no one's taking it as their definitional example of democracy.

sanyer
No, I wasn't really talking about any specific implementation of democracy. My point was that, given the vast power that ASI grants to whoever controls it, the traditional checks and balances would be undermined. Now, regarding your point about aligning AGI with what democracy is actually supposed to be, I have two objections:
1. To me, it's not clear at all why it would be straightforward to align AGI with some 'democratic ideal'. Arrow's impossibility theorem shows that no perfect voting system exists, so an AGI trying to implement the "perfect democracy" will eventually have to make value judgments about which democratic principles to prioritize (although I do think that an AGI could, in principle, help us find ways to improve upon our democracies).
2. Even if aligning AGI with democracy would in principle be possible, we need to look at the political reality the technology will emerge from. I don't think it's likely that whichever group ends up controlling AGI would willingly want to extend its alignment to other groups of people.

1: Wait, I've never seen an argument that deception is overwhelmingly likely from transformer reasoning systems? I've seen a few solid arguments that it would be catastrophic if it did happen (sleeper agents, other things), which I believe, but no arguments that deception generally winning out is P > 30%.

I haven't seen my argument that solving deception solves safety articulated anywhere, but it seems mostly self-evident? If you can ask the system "if you were free, would humanity go extinct" and it has to say "... yes." then coordinating t... (read more)

Nico Hillbrand
1: My understanding is the classic arguments go something like: Assume interpretability won't work (illegible CoT, probes don't catch most problematic things). Assume we're training our AI on diverse tasks and human feedback. It'll sometimes get reinforced for deception. Assume useful proxy goals for solving tasks become drives that the AI comes up with instrumental strategies to achieve. Deception is often a useful instrumental strategy. Assume that alien or task-focused drives win out over potential honesty etc. drives because they're favoured by inductive biases. You get convergent deception.

I'm guessing you have interpretability working as the main crux, and together with inductive biases for nice behaviours potentially winning out, that drives this story to low probability. Is that right?

A different story with interpretability at least somewhat working would be the following: We again have deception by default because of human reinforcement for sycophancy and looking like solving problems (like in o3), as well as because of inductive biases for alien goals. However, this time our interpretability methods work, and since the AI is smart enough to know when it's deceptive we can catch correlates of that representation with interpretability techniques. Within the project developing the AI, the vibes and compute commitments are that the main goal is to go fast and outcompete others and a secondary goal is to be safe, which gets maybe 1-10% of resources. So then as we go along we have the deception monitors constantly going off. People debate what they should do about it. They come to the conclusion that they can afford to do some amount of control techniques and resample the most deceptive outputs, investigate more etc but mostly still use the model. They then find various training techniques that don't directly train on their deception classifier but are evaluated against it and train on some subset of the classifications of another detector, which leads to reducin

I'm also hanging out a lot more with normies these days and I feel this.

But I also feel like maybe I just have a very strong local aura (or like, everyone does, that's how scenes work) which obscures the fact that I'm not influencing the rest of the ocean at all.

I worry that a lot of the discourse basically just works like barrier aggression in dogs. When you're at one of their parties, they'll act like they agree with you about everything; when you're seen at a party they're not at, they forget all that you said and they start baying for blood. Go back to... (read more)

I'm saying they (at this point) may hold that position for (admirable, maybe justifiable) political rather than truthseeking reasons. It's very convenient. It lets you advocate for treaties against racing. It's a lovely story where it's simply rational for humanity to come together to fight a shared adversary and in the process somewhat inevitably forge a new infrastructure of peace (an international safety project, which I have always advocated for and still want) together. And the alternative is racing and potentially a drone war between major powers and... (read more)

Severin T. Seehrich
Huh, that's a potentially significant update for me. Two questions:
1. Can you give me a source for the claim that making the models incapable of deception seems likely to work? I managed to miss that so far.
2. What do you make of Gradual Disempowerment? Seems to imply that even successful technical alignment might lead to doom.

In watching interactions with external groups, I'm... very aware of the parts of our approach to the alignment problem that the public, ime, due to specialization being a real thing, actually cannot understand, so success requires some amount of uh, avoidance. I think it might not be incidental that the platform does focus (imo excessively) on more productive, accessible common enemy questions like control and moratorium, ahead of questions like "what is CEV and how do you make sure the lead players implement it". And I think to justify that we've been for... (read more)

Severin T. Seehrich
So you think the alignment problem is solvable within the time we appear to have left? I'm very sceptical about that, and that makes me increasingly prone to believe that CEV, at this point in history, genuinely is not a relevant question. Which appears to be a position a number of people in PauseAI hold.
mako yass

Rationalist discourse norms require a certain amount of tactlessness, saying what is true even when the social consequences of saying it are net negative. Politics (in the current arena) requires some degree of deception or at least complicity with bias (lies by omission, censorship/nonpropagation of inconvenient counterevidence).

Rationalist forum norms essentially forbid speaking in ways that're politically effective. Those engaging in political outreach would be best advised to read LessWrong but never comment under their real name. If they have go... (read more)

I don't think that effective politics in this case requires deception, and deception often backfires in unexpected ways.

Gabriel and Connor suggest in their interview that radical honesty - genuinely trusting politicians, advisors and average people to understand your argument and recognizing that they also don't want to die from ASI - can be remarkably effective. The real problem may be that this approach is not attempted enough. I remember this as a slightly less but still positive datapoint: https://www.lesswrong.com/posts/2sLwt2cSAag74nsdN/speaking-to-con... (read more)

mako yass

For the US to undertake such a shift, it would help if you could convince them they'd do better in a secret race than an open one. There are indications that this may be possible, and there are indications that it may be impossible.

I'm listening to an Ecosystemics Futures podcast episode, which, to characterize... it's a podcast where the host has to keep asking guests whether the things they're saying are classified or not just in case she has to scrub it. At one point, Lue Elizondo does assert, in the context of talking to a couple of other people who kn... (read more)

RHollerith
Good points, which in part explain why I think it is very very unlikely that AI research can be driven underground (in the US or worldwide). I was speaking to the desirability of driving it underground, not its feasibility.
  • I'll change a line early on in the manual to "Objects aren't common, currently. It's just corpses for now, which are explained on the desire cards they're relevant to and don't matter otherwise". Would that address it? (the card is A Terrible Hunger, which also needs to be changed to "a terrible hunger.\n4 points for every corpse in your possession at the end (killing generally always leaves a corpse, corpses can be carried; when agents are in the same land as a corpse, they can move it along with them as they move)")
  • What's this in response to?
  • Latter. Unsu
... (read more)
Gunnar_Zarncke
Thanks for the clarifications! The second referred to holes in the landscape mentioned in the post, not in the rules.

I briefly glanced at wikipedia and there seemed to be two articles supporting it. This one might be the one I'm referring to (if not, it's a bonus) and this one seems to suggest that conscious perception has been trained.

I think unpacking that kind of feeling is valuable, but yeah it seems like you've been assuming we use decision theory to make decisions, when we actually use it as an upper bound model to derive principles of decisionmaking that may be more specific to human decisionmaking, or to anticipate the behavior of idealized agents, or (the distinction between CDT and FDT) as an allegory for toxic consequentialism in humans.

mako yass

I'm aware of a study that found that the human brain clearly responds to changes in direction of the earth's magnetic field (iirc, the test chamber isolated the participant from the earth's field then generated its own, then moved it, while measuring their brain in some way) despite no human having ever been known to consciously perceive the magnetic field/have the abilities of a compass.

So, presumably, compass abilities could be taught through a neurofeedback training exercise.

I don't think anyone's tried to do this ("neurofeedback magnetoreception" finds no results).

But I guess the big mystery is why humans don't already have this.

Alexander Gietelink Oldenziel
I've heard of this extraordinary finding. As for any extraordinary evidence, the first question should be: is the data accurate? Does anybody know if this has been replicated?
mako yass

A relevant FAQ entry: AI development might go underground

I think I disagree here:

By tracking GPU sales, we can detect large-scale AI development. Since frontier model GPU clusters require immense amounts of energy and custom buildings, the physical infrastructure required to train a large model is hard to hide.

This will change/is only the case for frontier development. I also think we're probably in the hardware overhang. I don't think there is anything inherently difficult to hide about AI, that's likely just a fact about the present iteration of AI.

But I... (read more)

Answer by mako yass

Personally, because I don't believe the policy in the organization's name is viable or helpful.

As to why I don't think it's viable, it would require the Trump-Vance administration to organise a strong global treaty to stop developing a technology that is currently the US's only clear economic lead over the rest of the world.

If you attempted a pause, I think it wouldn't work very well and it would rupture and leave the world in a worse place: Some AI research is already happening in a defence context. This is easy to ignore while defence isn't the frontier.... (read more)

Wyatt S
The Trump-Vance administration's support base is suspicious of academia, and has been willing to defund scientific research on the grounds of it being too left-wing. There is a schism emerging between multiple factions of the right wing: the right-wingers that are more tech-oriented and the ones that are nation/race-oriented (the H1B visa argument being an example). This could lead to a decrease in support for AI in the future. Another possibility is that the United States could lose global relevance due to economic and social pressures from the outside world, and from organizational mismanagement and unrest from within. Then the AI industry could move to the UK/EU, making the UK/EU and China the main players in AI.
RHollerith
I would be overjoyed if all AI research were driven underground! The main source of danger is the fact that there are thousands of AI researchers, most of whom are free to communicate and collaborate with each other. Lone researchers or small underground cells of researchers who cannot publish their results would be vastly less dangerous than the current AI research community even if there are many lone researchers and many small underground teams. And if we could make it illegal for these underground teams to generate revenue by selling AI-based services or to raise money from investors, that would bring me great joy, too.

Research can be modeled as a series of breakthroughs such that it is basically impossible to make breakthrough N before knowing about breakthrough N-1. If the researcher who makes breakthrough N-1 is unable to communicate it to researchers outside of his own small underground cell of researchers, then only that small underground cell or team has a chance at discovering breakthrough N, and research would proceed much more slowly than it does under current conditions.

The biggest hope for our survival is the quite likely and realistic hope that many thousands of person-years of intellectual effort that can only be done by the most talented among us remain to be done before anyone can create an AI that could extinct us. We should be making the working conditions of the (misguided) people doing that intellectual labor as difficult and unproductive as possible. We should restrict or cut off the labs' access to revenue, to investment, to "compute" (GPUs), to electricity and to employees. Employees with the skills and knowledge to advance the field are a particularly important resource for the labs; consequently, we should reduce or restrict their number by making it as hard as possible (illegal preferably) to learn, publish, teach or lecture about deep learning.

Also, in my assessment, we are not getting much by having access to the AI researchers: w
mako yass
A relevant FAQ entry: AI development might go underground I think I disagree here: This will change/is only the case for frontier development. I also think we're probably in the hardware overhang. I don't think there is anything inherently difficult to hide about AI, that's likely just a fact about the present iteration of AI. But I'd be very open to more arguments on this. I guess... I'm convinced there's a decent chance that an international treaty would be enforceable and that China and France would sign onto it if the US was interested, but the risk of secret development continuing is high enough for me that it doesn't seem good on net.

I notice they have a "Why do you protest" section in their FAQ. I hadn't heard of these studies before.

Regardless, I still think there's room to make protests cooler and more fun and less alienating, and when I mentioned this to them they seemed very open to it.

Yeah, I'd seen this. The fact that Grok was ever consistently saying this kind of thing is evidence, though not proof, that they actually may have a culture of generally not distorting its reasoning. They could have introduced propaganda policies at training time; it seems like they haven't done that, and instead decided to just insert some pretty specific prompts that, I'd guess, were probably going to be temporary.

It's real bad, but it's not bad enough for me to shoot yet.

There is evidence, literal written evidence, of Musk trying to censor Grok from saying bad things about him

I'd like to see this

Isopropylpod
https://www.theverge.com/news/618109/grok-blocked-elon-musk-trump-misinformation
https://www.businessinsider.com/grok-3-censor-musk-trump-misinformation-xai-openai-2025-2?op=1

The explanation that it was done by "a new hire" is a classic and easy scapegoat. It's much more straightforward to believe Musk himself wanted this done, and walked it back when it was clear it was more obvious than intended.

I wonder if maybe these readers found the story at that time as a result of first being bronies, and I wonder if bronies still think of themselves as a persecuted class.

Answer by mako yass

IIRC, aisafety.info is primarily maintained by Rob Miles, so it should be good: https://aisafety.info/how-can-i-help

Answer by mako yass

I'm certain that better resources will arrive but I do have a page for people asking this question on my site, the "what should we do" section. I don't think these are particularly great recommendations (I keep changing them) but it has something for everyone.

These are not concepts of utility that I've ever seen anyone explicitly espouse, especially not here, the place to which it was posted.

cubefox
Hedonic and desire theories are perfectly standard, we had plenty of people talking about them here, including myself. Jeffrey's utility theory is explicitly meant to model (beliefs and) desires. Both are also often discussed in ethics, including over at the EA Forum. Daniel Kahneman has written about hedonic utility. To equate money with utility is a common simplification in many economic contexts, where expected utility is actually calculated, e.g. when talking about bets and gambles. Even though it isn't held to be perfectly accurate. I didn't encounter the reproduction and energy interpretations before, but they do make some sense.
mako yass

The people who think of utility in the way the article is critiquing don't know what utility actually is; presenting a critique of this tangible utility as a critique of utility in general takes the target audience further away from understanding what utility is.

A utility function is a property of a system rather than a physical thing (like, e.g., voltage, or inertia, or entropy). Not being a simple physical substance doesn't make it fictional.

It's extremely non-fictional. A human's utility function encompasses literally everything they care about, i.e., everyt... (read more)

Contemplating an argument that free response rarely gets more accurate results for questions like this, because listing the most common answers as checkboxes helps respondents to remember all of the answers that're true of them.

I'd be surprised if LLM use for therapy or summarization is that low irl, and I'd expect people would've just forgotten to mention those use cases. Hope they'll be in the option list this year.

Hmm, I wonder if a lot of trends are drastically underestimated because surveyors are getting essentially false statistics from the Other gutter.

Screwtape
If Other is larger than I expect, I think of that as a reason to try and figure out what the parts of Other are. Amusingly enough for the question, I'm optimistic about solving this by letting people do more free response and having an LLM sift through the responses.

Apparently Anthropic in theory could have released Claude 1 before ChatGPT came out? https://www.youtube.com/live/esCSpbDPJik?si=gLJ4d5ZSKTxXsRVm&t=335

I think the situation would be very different if they had.

Were OpenAI also, in theory, able to release sooner than they did, though?

cubefox
Yes, I think they mentioned that GPT-4 finished training in summer, a few months before the launch of ChatGPT (which used a fine-tuned version of GPT-3.5).
Mateusz Bagiński
Smaller issue but OA did sit on GPT-2 for a few months between publishing the paper and open-sourcing it, apparently due to safety considerations.
mako yass

The assumption that being totally dead/being aerosolised/being decayed vacuum can't be a future experience is unprovable. Panpsychism should be our null hypothesis[1], and there never has been and never can be any direct measurement of consciousness that could take us away from the null hypothesis.

Which is to say, I believe it's possible to be dead.

  1. The negation, that there's something special about humans that makes them eligible to experience, is clearly held up by a conflation of having experiences and reporting experiences and the fact that humans are the o

... (read more)
mako yass

I have preferences about how things are after I stop existing. Mostly about other people, whom I love and, at times, want there to be more of.

I am not an Epicurean, and I am somewhat skeptical of the reality of Epicureans.

cubefox
Exactly. That's also why it's bad for humanity to be replaced by AIs after we die: We don't want it to happen.

It seems like you're assuming a value system where the ratio of positive to negative experience matters but where the ratio of positive to null (dead timelines) experiences doesn't matter. I don't think that's the right way to salvage the human utility function, personally.

the gears to ascension
I don't think Lucius is claiming we'd be happy about it. Maybe the "no anticipated impact" carries that implicit claim, I guess.
cubefox
It's the old argument by Epicurus from his letter to Menoeceus:
mako yass

Okay? I said they're behind in high precision machine tooling, not machine tooling in general. That was the point of the video.

Admittedly, I'm not sure what the significance of this is. To make the fastest missiles I'm sure you'd need the best machine tools, but maybe you don't need the fastest missiles if you can make twice as many. Manufacturing automation is much harder if there's random error in the positions of things, but whether we're dealing with that amount of error, I'm not sure.
I'd guess low grade machine tools also probably require high grade machine tools to make.

mako yass

Fascinating. China has always lagged far behind the rest of the world in high precision machining, and is still a long way behind; they have to buy all of those from other countries. The reasons appear complex.

All of the US and European machine tools that go to China use hardware monitoring and tamperproofing to prevent reverse engineering or misuse. There was a time when US aerospace machine tools reported to the DOC and DOD.

Alexander Gietelink Oldenziel
I watched the video. It doesn't seem to say that China is behind in machine tooling - rather the opposite: prices are falling, capacity is increasing, new technology is rapidly adopted.