You might be[1] overestimating the popularity of "they are playing god" in the same way you might overestimate the popularity of woke messaging. Loud moralizers aren't normal people either. Messages that appeal to them won't have the support you'd expect given their volume.
Compare, "It's going to take your job, personally". Could happen, maybe soon, for technophile programmers! Don't count them out yet.
Not rhetorical -- I really don't know
Eliezer Yudkowsky wrote a story, Kindness to Kin, about aliens who love(?) their family members in proportion to Hamilton's "I'd lay down my life for two brothers or eight cousins" rule. It gives an idea of how alien that is.
Then again, Proto-Indo-European had detailed kinship terms that correspond rather well to confidence of genetic kinship, so maybe it's a cultural thing.
Sure, I think that's a fair objection! Maybe for a business it's worth paying the marginal security cost of giving 20 new people admin accounts, but for the federal government that security cost is too high. Is that what people are objecting to? I'm reading comments like this:
Yeah, that's beyond unusual. It's not even slightly normal. And it is in fact very coup-like behavior if you look at coups in other countries.
And, I just don't think that's the case. I think this is pretty-darn-usual and very normal in the management consulting / private equity world.
I don't think foreign coups are a very good model for this? Coups don't tend to start by bringing in data scientists.
What I'm finding weird is... this was the action people thought worrying enough to bring to the LessWrong discussion. Cutting red tape to unblock data scientists in cost-cutting shakeups -- that sometimes works well! Assembling lists of all CIA officers and sending them emails, or trying to own the Gaza Strip, or <take your pick> -- I'm in far mode on these and have less direct experience, but they seem much more worrying. Why did this one make the threshold?
Huh, I came at this with the background of doing data analysis in large organizations and had a very different take.
You're a data scientist. You want to analyze what this huge organization (US government) is spending its money on in concrete terms. That information is spread across 400 mutually incompatible ancient payment systems. I'm not sure if you've viscerally felt the frustration of being blocked, spending all your time trying to get permission to read from 5 incompatible systems, let alone 400. But it would take months or years.
Fortunately, your boss is exceptionally good at Getting Things Done. You tell him that there's one system (BFS) that has all the data you need in one place. But BFS is protected by an army of bureaucrats, most of whom are named Florence, who are Very Particular, are Very Good at their job, Will Not let this system go down, Will Not let you potentially expose personally identifiable information by violating Section 3 subparagraph 2 of code 5, Will Not let you sweet-talk her into bypassing the safety systems she has spent the past 30 years setting up to protect oh-just-$6.13 trillion from fraud, embezzlement, and abuse, and if you somehow manage to get around these barriers she will Stop You.
Your boss Gets Things Done and threatens Florence's boss Mervin that if he does not give you absolutely all the permissions you ask for, Mervin will become the particular object of attention of two people named Elon Musk and Donald Trump.
You get absolutely all the permissions you want and go on with your day.
Ah, to have a boss like that!
EDIT TL;DR: I think this looks weirder in Far mode? Near mode (near to data science, not near government), giving outside consultant data scientists admin permissions for important databases does not seem weird or nefarious. It's the sort of thing that happens when the data scientist's boss is intimidatingly high up in an organization, like the President/CEO hiring a management consultant.
Checking my understanding: for the case of training a neural network, would S be the parameters of the model (along with perhaps buffers/state like moment estimates in Adam)? And would the evolution of the state be local in S-space? In other words, for neural network training, would S be a good choice for H?
In a recurrent neural network doing in-context learning, would S be something like the residual stream at a particular token?
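To make the first question concrete, here's a minimal sketch (my own framing and variable names, not the post's notation) of the S I have in mind: the parameters bundled together with Adam's moment estimates, where one training step is a map from the current S (plus a batch) to the next S.

```python
# Minimal sketch, assuming S = (params, m, v, t): model parameters plus Adam's
# moment estimates and step counter. One training step only reads the current
# S and the current gradient, which is the "local evolution" I'm asking about.
import numpy as np

def adam_step(S, grad, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    params, m, v, t = S
    t = t + 1
    m = b1 * m + (1 - b1) * grad            # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return (params, m, v, t)                # the whole tuple is the state S

# Toy example: loss = 0.5 * ||params||^2, so grad = params.
S = (np.ones(3), np.zeros(3), np.zeros(3), 0)
for _ in range(1000):
    S = adam_step(S, grad=S[0])
print(S[0])  # parameters head toward 0
```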
I'll conjecture the following as a VERY SPECULATIVE, inflammatory, riff-on-vibes statement:
Stoner-vibes based reason: I'm guessing you can reduce a problem like Horn Satisfiability[2] to gradient descent. Horn Satisfiability is a P-complete problem -- you can transform any polynomial-time decision problem into a Horn Satisfiability problem using a log-space transformation. Therefore, gradient descent is "at least as big as P" (P-hard). And I'm guessing you can express your formalization of gradient descent in P as well (hence "P-complete"). That would mean gradient descent would not be able to solve harder problems in e.g. NP unless P=NP.
Horn Satisfiability is about finding true/false values that satisfy a bunch of logic clauses of the form $x_1 \wedge x_2 \Rightarrow x_3$ or $\neg x_1 \vee \neg x_2$ (that second clause means "don't set both $x_1$ and $x_2$ to true -- at least one of them has to be false"). In the algorithm for solving it, you figure out a variable that must be set to true or false, then propagate that information forward to other clauses. I bet you can do this with a loss function, turning it into a greedy search on a hypercube.
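To gesture at that last bet, a very rough sketch (my own clause encoding, nothing from the post): treat the number of violated clauses as the loss, start at the all-False corner of the hypercube, and greedily flip a variable to True whenever a violated implication forces it -- which is just unit propagation for Horn clauses.

```python
# Speculative sketch of "Horn-SAT as greedy search on the hypercube".
# Clause encoding is my own: (body, head) means AND(body) => head;
# head=None means a purely negative clause like "NOT x1 OR NOT x2".

def loss(assignment, clauses):
    """Number of violated clauses -- the quantity the greedy walk drives to 0."""
    return sum(
        1
        for body, head in clauses
        if all(assignment[v] for v in body) and (head is None or not assignment[head])
    )

def horn_sat(clauses, num_vars):
    assignment = [False] * num_vars          # start at the all-False corner
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if all(assignment[v] for v in body):
                if head is None:
                    return None              # violated negative clause: unsatisfiable
                if not assignment[head]:
                    assignment[head] = True  # flip one coordinate to repair the clause
                    changed = True
    return assignment                        # loss(assignment, clauses) == 0 here

# Example: (=> x0), (x0 => x1), (x0 AND x1 => x2), (NOT x2 OR NOT x3)
clauses = [([], 0), ([0], 1), ([0, 1], 2), ([2, 3], None)]
print(horn_sat(clauses, num_vars=4))         # [True, True, True, False]
```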
Thanks! I'm not a GPU expert either. The reason I want to spread the toll units inside the GPU itself isn't to turn the GPU off -- it's to stop replay attacks. If the toll thing is in a separate chip, then the toll unit must have some way to tell the GPU "GPU, you are cleared to run". To hack the GPU, you just copy that "cleared to run" signal and send it to the GPU. The same "cleared to run" signal will always make the GPU work, unless there is something inside the GPU to make sure it won't accept the same "cleared to run" signal twice. That's the point of the mechanism I outline -- a way to make it so the same "cleared to run" signal for the GPU won't work twice.
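As a toy illustration of the replay problem (my own sketch, not the actual proposal -- the class names, the HMAC, and the key handling are all placeholders): the GPU hands out a fresh nonce, the toll unit signs that nonce, and the GPU only accepts a clearance tied to its latest nonce, so a captured "cleared to run" message does nothing the second time.

```python
# Toy sketch of nonce-based challenge-response (illustrative names only;
# a shared HMAC key stands in for whatever keys ship with the hardware).
import hmac, hashlib, secrets

SHARED_KEY = secrets.token_bytes(32)

class GPU:
    def __init__(self, key):
        self.key = key
        self.current_nonce = None

    def issue_challenge(self):
        self.current_nonce = secrets.token_bytes(16)   # fresh nonce every time
        return self.current_nonce

    def accept_clearance(self, nonce, tag):
        # Only a clearance tied to the *latest* nonce counts -- this is what kills replay.
        expected = hmac.new(self.key, self.current_nonce or b"", hashlib.sha256).digest()
        ok = nonce == self.current_nonce and hmac.compare_digest(tag, expected)
        self.current_nonce = None                      # single use
        return ok

class TollUnit:
    def __init__(self, key):
        self.key = key

    def clear_to_run(self, nonce):
        return hmac.new(self.key, nonce, hashlib.sha256).digest()

gpu, toll = GPU(SHARED_KEY), TollUnit(SHARED_KEY)
n1 = gpu.issue_challenge()
tag1 = toll.clear_to_run(n1)
print(gpu.accept_clearance(n1, tag1))   # True: fresh clearance is accepted

gpu.issue_challenge()                   # GPU moves on to a new nonce
print(gpu.accept_clearance(n1, tag1))   # False: replaying the old message fails
```

The same structure works with public-key signatures instead of a shared key; the load-bearing part is only that the nonce is fresh and single-use.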
Bonus: Instead of writing the entire logic (challenge response and so on) in advance, I think it would be better to run actual code, but only if it's signed (for example, by Nvidia), in which case they can send software updates with new creative limitations, and we don't need to consider all our ideas (limit bandwidth? limit GPS location?) in advance.
Hmm okay, but why do I let Nvidia send me new restrictive software updates? Why don't I run my GPUs in an underground bunker, using the oldest, most broken firmware?
I don't think it's system 1 doing the systemization. Evolution beat fear of death into us in lots of independent forms (fear of heights, snakes, thirst, suffocation, etc.), but for the same underlying reason. Fear of death is not just an abstraction humans invented or acquired in childhood; it's a "natural idea" pointed at by our brain's innate circuitry from many directions. Utilitarianism doesn't come with that scaffolding. We don't learn to systematize Euclidean and Minkowskian spaces the same way either.