I agree with many parts of this post. I think xkcd was largely right: our brains have one scale and resize our experiences to fit. I think for a lot of people the hardest step is just to notice what things they actually like, and how much, and in what quantities, before they habituate.
However, the specific substitutions, ascetic choices, etc. are very much going to vary between people, because we have different preferences. You can often get a lot of economic-efficiency-of-pleasure benefit by embracing the places where you prefer things society...
In the world where AI does put most SWEs out of work or severely curtails their future earnings, how likely is it that the economy stays in a context where USD or other fiat currencies remain valuable, and for how long? At some level we don't normally need to think about, USD has value because the US government demands citizens use that currency to pay taxes, and it has an army and can ruin your life if you refuse.
I've mentioned it before and am glad to see people exploring the possibilities, but I really get confused whenever I try to think about (absolute or relative) asset prices along the path to AGI/ASI.
The version of this phrase I've most often heard is "Rearranging deck chairs on the Titanic."
Keep in mind that we're now at the stage of "Leading AI labs can raise tens to hundreds of billions of dollars to fund continued development of their technology and infrastructure." AKA in the next couple of years we'll see AI investment comparable to or exceeding the total that has ever been invested in the field. Calendar time is not the primary metric when effort is scaling this fast.
A lot of that next wave of funding will go to physical infrastructure, but if there is an identified research bottleneck, with a plausible claim to being the major bottlen...
Agreed on population. To a first approximation, it's directly proportional to the supply of labor, supply of new ideas, quantity of total societal wealth, and market size for any particular good or service. That last one also means that with a larger population, the economic value of new innovations goes up, meaning we can profitably invest more resources in developing harder-to-invent things.
I really don't know how that impact (more minds) will compare to the improved capabilities of those minds. We've also never had a single individual with as much ...
Fair enough, thanks.
My own understanding is that other than maybe writing code, no one has actually given LLMs the kind of training a talented human gets towards becoming the kind of person capable of performing novel and useful intellectual work. An LLM has a lot of knowledge, but knowledge isn't what makes useful and novel intellectual work achievable. A non-reasoning model gives you the equivalent of a top-of-mind answer. A reasoning model with a large context window and chain of thought can do better, and solve more complex problems, but still mo...
Great post. I think the central claim is plausible, and would very much like to find out I'm in a world where AGI is decades away instead of years. We might be ready by then.
If I am reading this correctly, there are two specific tests you mention:
1) GPT-5 level models come out on schedule (as @Julian Bradshaw noted, we are still well within the expected timeframe based on trends to this point)
2) LLMs or agents built on LLMs do something "important" in some field of science, math, or writing
I would add on test 2 that neither have almost all huma...
It's also not clear to me that the model is automatically making a mistake, or being biased, even if the claim is in some sense(s) "true." That would depend on what it thinks the questions mean. For example:
We're not dead yet. Failure is not certain, even when the quest stands upon the edge of a knife. We can still make plans, and keep on refining and trying to implement them.
And a lot can happen in 3-5 years. There could be a terrible-but-not-catastrophic or catastrophic-but-not-existential disaster bad enough to cut through a lot of the problem. Specific world leaders could die or resign or get voted out and replaced with someone who is either actually competent, or else committed to overturning their predecessor's legacy, or something else. We could be lucky a...
Exactly, yes.
Also:
In fact I think the claim that engines are exactly equally better than horses at every horse-task is obviously false if you think about it for two minutes.
I came to comment mainly on this claim in the OP, so I'll put it here: In particular, at a glance, horses can reproduce, find their own food and fuel, self-repair, and learn new skills to execute independently or semi-independently. These advantages were not sufficient in practice to save (most) horses from the impact of engines, and I do not see why I should expect humans to fare...
Why do your teachers, parents and other adult authorities tell you to listen to a propaganda machine? Because the propaganda machine is working.
I forget where I read this, but there's a reason they call it the "news" and not the "importants."
I would be interested in this too. My uninformed intuition is that this would be path-dependent: on what becomes abundant vs. scarce, how fast, and on which decisions initial owners and regulators make at which points.
I'm in a similar position as you describe, perspective-wise, and would also like to understand the situation better.
I do think there are good reasons why someone should maybe have direct access to some of these systems, though probably not as a lone individual. I seem to remember a few government shutdown/debt ceiling fight/whatever crises ago, there were articles about how there were fundamentally no systems in place to control or prioritize which bills got paid and which didn't. Money came into the treasury, money left to pay for things, first in f...
And unfortunately, this kind of thinking is extremely common, although most people don't have Gary Marcus' reach. Lately I've been having similar discussions with co-workers around once a week. A few of them are starting to get it, but most still aren't extrapolating beyond the specific thing I show them.
Ah, ok, then I misread it. I thought this part of the story was that it tested all of the above, then chose one, a mirror life mold, to deploy. My mistake.
Personally I got a little humor from Arthropodic. Reminds me of the observation that AIs are alien minds, and I wouldn't want to contend with a superintelligent spider.
I think this story lines up with my own fears of how a not-quite-worst case scenario plays out. I would maybe suggest that there's no reason for U3 to limit itself to one WMD or one kind of WMD. It can develop and deploy bacteria, viruses, molds, mirror life of all three types, and manufactured nanobots, all at once, and then deploy many kinds of each simultaneously. It's probably smart enough to do so in ways that make it look clumsy in case anyone notices, like its experiments are unfocused and doomed to fail. Depending on the dynamics, this could actual...
we can't know how the costs will change between the first and thousandth fusion power plant.
Fusion plants are manufactured. By default, our assumption should be that plant costs follow typical experience-curve behavior; most technologies involving production of physical goods do. Whatever the learning rate x for fusion turns out to be (the cost multiplier per doubling of cumulative production), the 1000th plant will likely cost close to x^10 times the first, since 1000 plants is about ten doublings. Obviously the details depend on other factors, but this should be the default starting assumption. Yes, the eventual impact assumption should be significant societal and techn...
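As a minimal sketch of that arithmetic (my own illustration, with an assumed 90% learning rate; the actual rate for fusion is unknown):

```python
import math

def unit_cost(n, first_unit_cost=1.0, learning_rate=0.9):
    """Standard experience curve: the nth unit costs
    first_unit_cost * n ** log2(learning_rate)."""
    return first_unit_cost * n ** math.log2(learning_rate)

# 1000 plants is ~10 doublings of cumulative output (2**10 = 1024),
# so the 1000th plant costs roughly learning_rate**10 times the first.
print(round(unit_cost(1000), 2))  # ~0.35
print(round(0.9 ** 10, 2))        # ~0.35, the x^10 shortcut above
```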
I'd say I agree with just about all of that, and I'm glad to see it laid out so clearly!
I just also wouldn't be hugely surprised if it turns out something like designing and building remote-controllable self-replicating globally-deployable nanotech (as one example) is in some sense fundamentally "easy" for even an early ASI/modestly superhuman AGI. Say that's the case: we build a few for the ASI and then distribute them across the world in a matter of weeks. They do what controlled self-replicating nanobots do. Then after a few months the ASI alre...
I very much agree with the value of not expecting a silver bullet, not accelerating arms race dynamics, fostering cooperation, and recognizing in what ways AGI realism represents a stark break from the impacts of typical technological advances. The kind of world you're describing is a possibility, maybe a strong one, and we don't want to repeat arrogant past mistakes or get caught flat-footed.
That said, I think this chain of logic hinges closely on just what "…at least for a while" means in practice, yes? If one side has enough of an AI lead to increase it...
Of course I agree we won't attain any technology that is not possible, tautologically. And I have more than enough remaining uncertainty about what the mind is or what an identity entails that if ASI told me an upload wouldn't be me, I wouldn't really have a rebuttal. But the body and brain are an arrangement of atoms, and healthy bodies correspond to arrangements of atoms that are physically constructable. I find it hard to imagine what fundamental limitation could prevent the rearrangement of old-failing-body atoms into young-healthy-body atoms. If it's a practical limitation of repair complexity, then something like a whole-body-transplant seems like it could bypass the entire question.
I don't think the idea is that happy moments are necessarily outweighed by suffering. It reads to me like it's the idea that suffering is inherent in existence, not just for humans but for all life, combined with a kind of negative utilitarianism.
I think I would be very happy to see that first-half world, too. And depending on how we got it, yeah, it probably wouldn't go wrong in the way this story portrays. But, the principles that generate that world might actually be underspecified in something like the ways described, meaning that they allow for ...
The specifics of what I'm thinking of vary a lot between jurisdictions, and some of them aren't necessarily strictly illegal so much as "Relevant authorities might cause you a lot of problems even if you haven't broken any laws." But roughly speaking, I'm thinking about the umbrella of everything that kids are no longer allowed to do that increases demands on parents compared to past generations, plus all the rules and policies that collectively make childcare very expensive, and make you need to live in an expensive town to have good public schools. Those are the first categories that come to mind for me.
Ah, yes, that does clear it up! I definitely am much more on board, sorry I misread the first time, and the footnote helps a lot.
As for the questions I asked that weren't clear, they're much less relevant now that I have your clarification. But the idea was: I'm of the opinion that we have a lot more know-how buried and latent in all our know-that data, such that many things humans have never done, or even thought of being able to do, could nevertheless be overdetermined (or nearly so) without additional experimental data.
Overall I agree with the statements here in the mathematical sense, but I disagree about how much to index on them for practical considerations. Upvoted because I think it is a well-laid-out description of many people's reasons for believing AI will not be as dangerous as others fear.
First, do you agree that additional knowing-that reduces the amount of failure needed to achieve knowing-how?
If not, are you also of the opinion that schools, education as a concept, books and similar storage media, or other intentional methods of imparting know-how ...
I hope it would, but I actually think it would depend on who or what killed whom, how, and whether it was really an accident at all.
If an American-made AI hacked the DOD and nuked Milan because someone asked it to find a way to get the 2026 Olympics moved, then I agree, we would probably get pushback against race incentives.
If a Chinese-made AI killed millions in Taiwan in an effort to create an opportunity for China to seize control, that could possibly *accelerate* race dynamics.
I think it's more a matter of Not Enough Dakka plus making it illegal to do those things in what should be reasonable ways. I agree there are economic (and regulatory) interventions that could make an enormous difference, but for various reasons I don't think any government is currently willing and able to implement them at scale. A crisis needs to be a lot more acute to motivate that scale of change.
You would think so, I certainly used to think so, but somehow it doesn't seem to work that way in practice. That's usually the step where my wife does the seasoning and adds the liquids, so IDK if there is something specific she does that makes it work. But I'm definitely whipping them with the whisk attachment, which incorporates air, and not beating them with a paddle attachment. I suspect that's the majority of why it works.
I mentioned this in my comment above, but I think it might be worthwhile to differentiate more explicitly between probability distributions and probability density functions. You can have a monotonically decreasing probability density function F(r) (i.e., the probability of r falling in some range is the integral of F(r) over that range, and the integral over all r is normalized to 1) and still have the expected value of r be as large as you want. That's because the expected value is the integral of r*F(r), not the value or integral of F(r) itself.
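A standard concrete case (my illustration, not from the thread): the exponential density is monotonically decreasing everywhere, yet its mean can be made as large as you like by shrinking λ:

```latex
F(r) = \lambda e^{-\lambda r} \quad (r \ge 0), \qquad
\int_0^\infty F(r)\,dr = 1, \qquad
E[r] = \int_0^\infty r\,F(r)\,dr = \frac{1}{\lambda}
```

As λ → 0, E[r] → ∞ even though F(r) is strictly decreasing in r.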
I believe the expected value...
So, as you noted in another comment, this depends on your understanding of the nature of the types of errors individual perturbations are likely to induce. I was automatically guessing many small random perturbations that could be approximated by a random walk, under the assumption that any systematic errors are the kind of thing the sniper could at least mostly adjust for even at extreme range. Which I could be easily convinced is completely false in ways I have no ability to concretely anticipate.
That said, whatever assumptions I make about the kinds of ...
I used to use a ricer, but found that it always made the potatoes too cold by the time I ate them. Do you find this? If not, do you (even if you never thought of it this way) do anything specific to prevent it? If so, do you then reheat them, and how?
With a stand mixer and the whisk attachment I found removing the ricer step hasn't really mattered, but any other whipping method and yeah, it's very useful.
Fair enough, I moved into a small space a few years ago and mostly buy smaller quantities now. I also like that the Little Potato Company's potatoes are already washed and I'm often boondocking/on a limited water supply.
Costco is generally above average in most things, so definitely a good choice. I find the brands I mentioned to be more consistently high quality across locations and over time, but not too much better at their respective bests. So when I need a specific meal to be high quality, like on holidays, I'll make sure to go to Trader Joe's.
F...
In my experience that's true for a hand-held masher or hand mixer, but if I'm slow-whipping in a stand mixer with butter and cream, golds give a fluffier, smoother, lighter result.
I really enjoyed this piece, not because of the specific result, but because of the style of reasoning it represents. How much advantage, under what kind of rules, can be overcome with what level of intelligence?
Sometimes the answer is none. "I play x" overwhelms any level of intelligence at tic tac toe.
In larger and more open games the advantage of intelligence increases, because you can do more by being better at exploring the space of possible moves.
"Real life" is plausibly the largest and most open game, where the advantage of intelli...
The latter. And yes, I do agree with the superior on that specific, narrow mathematical question. If I am trying to run with the spirit and letter of the dilemma as presented, then I will bite that bullet (sorry, I couldn't resist).
In real-world situations, at the point where you somehow find yourself in such a position, the correct solution is probably "call in air support and bomb them instead, or find a way to fire many bullets at once" - you've already decided you're willing to kill a child for a chance to take out the target.
Similarly, if the ...
Interesting, why don't you like them for mashing? That's specifically what I like them best for. Although IIUC a knish needs a different texture to hold together well. I also don't use golds for (unbreaded) potato cakes unless I mash them in advance and use them left over.
I'm no chef, but I love to cook, and my Thanksgiving meals are planned in spreadsheets with 10-minute increments of what goes where. Plus I currently live full-time in an RV, so I've gotten used to improvising with nonstandard and less reliable tools. Take or leave my suggestions accordingly.
It's often a good idea, until and unless you know your oven really well, to put an oven thermometer in the oven on the rack and adjust accordingly. They're <$10. Try placing it in different spots and figure out how evenly or unevenly your oven heats, and how a pan in...
Yukon Golds are objectively the best potato
Correct :-)
Do you have a specific type of gold you use? The best I can reliably get are the organic golds from Trader Joe's, they come in a 3 lb bag. When I'm making home fries or anything diced I also really like The Little Potato Company.
Edit to add: now you have me wanting to make potatoes au gratin. It's been a while since I've made a good cheese sauce.
I think the principle is fine when applied to how variables affect movement of the bullet in space. I don't necessarily think it means taking the shot is the right call, tactically.
Note: I've never fired a real gun in any context, so a lot of the specifics of my reasoning are probably wrong but here goes anyway.
Essentially I see the POI as stating that the bullet takes a 2D random walk with unknown step sizes (though possibly with a known distribution of sizes) on its way to the target. As distance increases, variance in the random walk increases.
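As a toy sketch of that picture (my own, assuming many small independent Gaussian kicks per unit of distance), lateral variance grows linearly with the number of steps, so the spread at the point of impact grows like the square root of range:

```python
import random

def impact_offset(n_steps, step_sd=0.001):
    """Sum n_steps small independent lateral kicks in x and y."""
    x = sum(random.gauss(0, step_sd) for _ in range(n_steps))
    y = sum(random.gauss(0, step_sd) for _ in range(n_steps))
    return x, y

for n in (100, 400, 1600):  # range in steps; 4x the range -> 2x the spread
    shots = [impact_offset(n) for _ in range(2000)]
    sd_x = (sum(x * x for x, _ in shots) / len(shots)) ** 0.5
    print(n, round(sd_x, 4))  # sd is roughly step_sd * sqrt(n)
```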
Give...
To the extent this reasoning works, it breaks the moment any agent has anything like a decisive strategic advantage. At that point no one else's instrumental or terminal goals can act as constraints.
This seems like it goes way too far. What exactly are we punishing the server or restaurant for if a patron drops some bills on the table and walks out when no one is looking?
Edit to add: Just thinking about the converse, you could also make it sound more ridiculous by rewriting it with more obscure parts of the legendarium, too.
Conquer Morgoth with Ungoliant. Turn Maiar into balrogs. Glamdring among the morgul-blades.
I would assume that his children in particular would be quite familiar with their usage, though, and that seems to be who a lot of the legendarium-heavy letters are written to.
I also think that it sounds at least slightly less ridiculous to rewrite that passage in the language of Star Wars rather than Starcraft. Conquer the Emperor with the Dark Side. Turn Jedi into Sith. An X-Wing among the TIE fighters. Probably because it's more culturally established, with a more deeply developed mythos.
How does this interact with MA's salary transparency laws? If you are in a role where no one else shares your title, then no problem. Otherwise, this could enable an employer to pressure others to take pay cuts or smaller raises, or it could force them to give prospective new employees a much lower bottom end of the salary range for the role they're applying to.
To the first objection: To the extent that AGIs participate in status games with humans, they will win. They'll be better at it than we are. You could technically have a gang of toddlers play basketball with an NBA all-star team, but I don't think you can really say they're competing with each other, or that they're both playing the same game in the sense the people talking about status games mean it.
To the second objection: It is not at all clear to me whether any biological intelligence augmentation path puts humans on a level playing field with AI syst...
FWIW I think it probably would be, between those two. Land and houses are different, even if we usually buy them together. When I bought my first house, the appraisal included separate line items for the house vs the land it was on, and the land was a majority of the price I was paying. I don't know what the OP actually meant, but to my own thinking, owning land (in the limit of advanced technology making everything buildable and extractable) means owning some fixed share of Earth's total supply of energy, water, air, and minerals. Building a house, given ...
One pet peeve of mine is that actual weather forecasts for the public don't disambiguate interpretations of rain chance. Is it the chance of any rain at some point in that day or hour? Is it the expected proportion of that day or hour during which it will be raining?
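A toy illustration of how far apart the two readings can be (my own numbers, assuming each hour of a day independently has a 2% chance of rain):

```python
import random

random.seed(0)
days = [[random.random() < 0.02 for _ in range(24)] for _ in range(10_000)]

p_any_rain = sum(any(day) for day in days) / len(days)
mean_fraction = sum(sum(day) / 24 for day in days) / len(days)

print(round(p_any_rain, 2))     # ~0.38: "38% chance of rain" on reading one
print(round(mean_fraction, 2))  # ~0.02: raining about 2% of the day
```

Same weather, and the two readings differ by an order of magnitude.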
I sympathize with this viewpoint, and it's hardly the worst outcome we could end up with. But, while both authors would seem to agree with a prohibition on calling up gods in a grab for power, they do so with opposite opinions about the ultimate impact of doing so. Neither offers a long-term possibility of humans retaining life, control, and freedom.
For Tolkien, I would point out first that the Elves successfully made rings free of Sauron's influence. And second, that Eru Iluvatar's existence guarantees that Sauron and Morgoth can never truly win, and at o...
I suppose, but 1) there has been no build-up/tolerance, the effects from a given dose have been stable, 2) there are no cravings for it or anything like that, 3) I've never had anything like withdrawal symptoms when I've missed a dose, other than a reversion to how I was for the years before I started taking it at all. What would a chemical dependency actually mean in this context?
My depression symptoms centered on dulled emotions and senses, and slowed thinking. This came on gradually over about 10 years, followed by about 2 years of therapy with little t...
Thanks for writing this. I said a few years ago, at the time just over half seriously, that there could be a lot of value in trying to solve non-AI-related problems even on short timelines, if our actions and writings become a larger part of the data on which AI is trained and through which it comes to understand the world.
That said, this one gives me pause in particular:
I think that in the context of non-human minds of any kind, it is especially important to aim for the platinum rule and not the golden. We want to treat them the way they would want to be treated, and vice versa.