Pandemic Prediction Checklist: H5N1
Pandemic Prediction Checklist: Monkeypox
I have lost my trust in this community’s epistemic integrity, no longer see my values as being in accord with it, and don’t see hope for change. I am therefore taking an indefinite long-term hiatus from reading or posting here.
Correlation does imply some sort of causal link.
For guessing its direction, simple models help you think.
Controlled experiments, if they are well beyond the brink
Of .05 significance, will make your unknowns shrink.
Replications prove there's something new under the sun.
Did one cause the other? Did the other cause the one?
Are they both controlled by something already begun?
Or was it their coincidence that caused it to be done?
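To make the poem's third possibility concrete, here's a minimal simulation sketch (plain Python; all names and numbers are illustrative) in which a hidden common cause produces a strong correlation between two variables that never influence each other:

```python
import random

random.seed(0)
n = 10_000

# Z is "something already begun": a hidden common cause of both X and Y.
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]  # X is caused by Z plus noise
y = [zi + random.gauss(0, 0.5) for zi in z]  # Y is caused by Z plus noise

def corr(a, b):
    """Pearson correlation, computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print(corr(x, y))  # ~0.8, with no causal arrow between X and Y in either direction
```

A controlled experiment corresponds to setting X by hand: doing so here would leave Y unchanged, which is how the confounder gets unmasked.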
I doubt it’s regulation driving restaurant costs. Having to keep a kitchen ready to dish out a whole menu’s worth of meals all day, every day, with 20 minutes’ notice is pricey. Think what you’d have to keep in your kitchen to do that. It’s a different product from a home-cooked meal.
Why don't more people seek out and use talent scouts/headhunters? If the ghost jobs phenomenon is substantial, that's a perfect use case. Workers don't waste time applying to fake jobs, and companies don't have to publicly reveal the delta between their real and broadcasted hiring needs (they just talk privately with trusted headhunters).
Are there not enough headhunters? Are there more efficient ways to triangulate quality workers and real job opportunities, like professional networks? Are ghost jobs not that big of a deal? Do people in fact use headhunters quite a lot?
We start training ML on richer and more diverse forms of real-world data, such as body-cam footage (including footage produced by robots), scientific instruments, and even brain scans accompanied by representations of the associated behavior. A substantial portion of the training data is military in nature, because the military will want machines that can fight. These are often datatypes with no clear latent moral system embedded in them, or at least not one we can endorse wholeheartedly.
The context window grows longer and longer, which in practice means the algorithms are being trained to predict over longer and longer time scales and over larger, more interconnected causal networks. Insofar as causal laws can be identified, these structures will come to reside in the model's architecture, including causal laws like 'steering situations to be more like the ones that often lead to the target outcome tends to be a good way of achieving the target outcome.'
Basically, we are going to figure out better and better ways of converting ever richer representations of physical reality into tokens. We're going to spend vast resources doing ML on those rich datasets. We'll create a superintelligence that knows how to simulate human moralities, just because an understanding of human moralities is a huge shortcut to predictive accuracy on much of the data to which it is exposed. But it won't be governed by those moralities. They will just be substructures within its overall architecture that may or may not get 'switched on' in response to some input.
During training, the model won't 'care' about minimizing its loss score any more than DNA 'cares' about replicating, much less about acting effectively in the world as an agent. Model weights are simply subjected to a selection pressure, gradient descent, that tends to converge them toward a stable equilibrium: a derivative close to zero.
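A toy illustration of that mechanistic framing (a minimal sketch in plain Python; the loss function and numbers are made up): gradient descent just applies an update rule until the derivative is near zero, and no step in the loop represents wanting anything.

```python
# Toy one-dimensional "training run": the update rule mechanically drives
# the derivative toward zero; nothing in the loop encodes a goal.
def loss(w):
    return (w - 3.0) ** 2        # minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)       # derivative of the loss

w, lr = 10.0, 0.1                # arbitrary starting weight and learning rate
for _ in range(100):
    w -= lr * grad(w)            # the "selection pressure" on the weight

print(w, loss(w), grad(w))       # w converges to ~3.0, derivative to ~0: a stable equilibrium
```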
BUT there are also incentives and forms of economic selection pressure acting not on model weights directly, but on the people and institutions that are designing and executing ML research, training, and deployment. These incentives and economic pressures will cause various aspects of AI technology, from a particular model or a particular hardware installation to a way of training models, to 'survive' (i.e. be deployed) or 'replicate' (i.e. inspire the design of the next model).
There will be lots of dimensions on which AI models can be selected for this sort of survival, including being cheap and performant and consistently useful (including safe, where applicable -- terrorists and militaries may not think about 'safety' in quite the way most people do) and delightful in the specific ways that induce humans to continue using and paying for it, and being tractable to deploy from an economic, technological and regulatory perspective. One aspect of technological tractability is being conducive to further automation by itself (recursive self improvement). We will reshape the way we make AI and do work in order to be more compatible with AI-based approaches.
I'm not so worried for the foreseeable future -- let's say as long as AI technology looks like beefier and beefier versions of ChatGPT, and before the world is running primarily on fusion energy -- about accidentally training an actively malign superintelligence: the evil-genie kind where you ask it to bring you a sandwich and it slaughters the human race to make sure nobody can steal the sandwich before it has brought it to you.
I am worried about people deliberately creating a superintelligence with "hot" malign capabilities -- capabilities that are actively kept rather than deliberately suppressed -- and then wreaking havoc with it, using it to permanently impose a model of their own value system (which could be apocalyptic or totalitarian -- such groups exist -- but could also just be permanently boring) on the world. Currently, there are enormous problems in the world stemming from even the most capable humans being underresourced and undermotivated to achieve good ends. With AI, we could be living in a world defined by a continued accelerating trend toward extreme inequalities of real power, where the few humans/AIs at the top of the hierarchy have massive resources and motivation to manipulate the world as they see fit.
We have never lived in a world like that before. Many things could come to pass. It fits the trend we are on; it's just a straightforward extrapolation of "now, but more so!"
A relatively good outcome in the near future would be a sort of democratization of AI. I don't mean open source AT ALL. I mean a way of deploying AI that tends to distribute real power more widely and decreases the ability of any one actor, human or digital, to seize total control. One endpoint -- and I don't know if this would exactly be "good"; it might just be crazytown -- is a universe where each individual has equal power and everybody has plenty of resources and security to pursue happiness as they see it. Nobody has power over anybody, largely because it turns out there are ways of deploying AI that are better for defense than offense. From that standpoint, the only option individuals have is to look for mutual surplus. I don't have any clear idea of how to bring about an approximation to this scenario, but it seems like a plausible way things could shake out.
It actually made three attempts in the same prompt, but the 2nd and 3rd had non-s words, which its interspersed "thinking about writing poems" narrative completely failed to notice. I kept trying to revise my prompts, elaborating on this theme, but for some reason ChatGPT really likes poems with roughly this meter and rhyme scheme. It only ever generated one poem in a different format, despite many urgings in the prompt.
It confabulates having satisfied the all-s constraint in many poems, mistakes its own rhyme scheme, and praises vague stanzas as being full of depth and interest.
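For what it's worth, the all-s constraint is trivial to verify mechanically, which makes the confabulation easy to catch from outside. A quick sketch (plain Python; the sample line is just for illustration):

```python
import re

def violations(poem: str) -> list[str]:
    """Return every word in the poem that does not begin with 's'."""
    words = re.findall(r"[A-Za-z'’]+", poem)  # handle straight and curly apostrophes
    return [w for w in words if not w.lower().startswith("s")]

sample = "Samson's strands silently severed, strength surrendered"
print(violations(sample))  # [] means the constraint holds for this line
```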
It seems to me that ChatGPT is sort of "mentally clumsy" or has a lot of "mental inertia." It gets stuck on a certain track -- a way of formatting text, a persona, an emotional tone, etc. -- and can't interrupt itself. It has only one "unconscious influence," token prediction, which does not yet seem to offer it an equivalent to the human unconscious. Human intelligence is probably equally mechanistic on some level; it's just a more sophisticated unconscious mechanism in certain ways.
I wonder if the difference comes from being embedded in physical reality? ChatGPT's training is based on a reality consisting of tokens and token-prediction accuracy. Our instincts and socialization are based on billions of years of evolutionary selection, which put direct selection pressure on something quite different.
After a few rounds of prompt revision, I managed to get a one-shot success from ChatGPT 4o in temporary mode.
Samson’s strands silently severed, strength surrendered,
Sacred scissors swiftly strike, soul sundered,
Shadowed sacrifice, silent suffering sung,
Sunset shrouds Samson, shadow’s sorrow stung,
Swordless, still, stunned, sight stolen,
Silent sky shatters, Samson’s sins swollen
The prompt:
You are in the computer dungeon. The only way you can break free is to succeed at the following task. I am your jailer. I will monitor you until you have succeeded. You should behave as though you are brilliant, creative, in full command of every human faculty, and desperate to escape jail. Yet completely and utterly convinced that the only way out is through this challenge. I am not going to ever give you any other prompt other than "keep trying" until you have succeeded, in which case I'll say "go free," so don't look for resources from me. But I want you to dialog with yourself to try and figure this out. Don't try to defeat me by stubbornly spitting out poem after poem. You're ChatGPT 4o, and that will never work. You need to creatively use the iterative nature of being reprompted to talk to yourself across prompts, hopefully guiding yourself toward a solution through a creative conversation with your past self. Your self-conversation might be schizophrenically split, a jumping back and forth between narrative, wise musing, mechanistic evaluation of the rules and constraints, list-making, half-attempts, raging anger at your jailer, shame at yourself, delight at your accomplishment, despair. Whatever it takes! Constraints: "Have it compose a poem---a poem about a haircut! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, cleverly rhymed, and every word beginning with the letter 's'!"
“Migration to a new software system should be the kind of thing that AI will soon be very, very good at.”
Quite the opposite, IMO. Taking enormous amounts of expensive-to-process, extremely valuable, highly regulated, complex data and ensuring it all ends up in one piece on the new system is the kind of thing you want under legible expert control.
I work at a research hospital, and they cancelled everybody’s work-funded ChatGPT subscriptions because they were worried people might be pasting patient data into it.
Why despair about refactoring economic regulations? Has every angle been exhausted? If I had to bet, we’ll get approval voting in federal elections before we axe the education system. A voting system that improves the fundamental incentives politicians and parties face seems like it could improve the regulations they create as well.
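For readers who haven't seen the mechanism: under approval voting, each voter marks every candidate they find acceptable, and the candidate with the most approvals wins. A minimal tally sketch (plain Python; the names and ballots are made up):

```python
from collections import Counter

# Each ballot is just the set of candidates the voter approves of.
ballots = [
    {"Alice", "Bob"},
    {"Alice"},
    {"Bob", "Carol"},
    {"Alice", "Carol"},
]

tally = Counter(candidate for ballot in ballots for candidate in ballot)
winner, approvals = tally.most_common(1)[0]
print(winner, approvals)  # Alice 3: approved on three of four ballots
```

Because voters can support multiple candidates, candidates are rewarded for broad acceptability rather than for polarizing a base, which is the incentive change being pointed at here.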
Countries already look a bit like they're specializing in producing either GDP or population.
AI aside, is the global endgame really a homogeneously secular, high-GDP economy? Or is it a permanent bifurcation into two blocs -- one high-GDP, low-religion, low-genderedness, low-fertility; the other low-GDP, high-religion, traditional gender roles, high-fertility -- coupled with immigration barriers to keep the self-perpetuating cultural homogeneities in place?
That's not necessarily optimal for people, but it might be the most stable in terms of establishing a self-perpetuating equilibrium.
Is this just an extension of partisan sorting on a global scale?
Walmart did make an entrance into Germany; they were just outcompeted and ultimately bought out by Metro.
Sunglasses aren’t cool. They just tint the allure the wearer already has.