10+ years ago, I expected that self-driving trucks would be common on US highways by 2025, and that self-driving would be having a large effect on the employment of long-haul truckers.
In reality, self-driving trucks are still in testing on a limited set of highways and driving conditions. The industry still wants to hire more human long-haul truckers, and is officially expected to keep doing so for some time.
I expected that long-distance trucking would have overtaken passenger cars as the "face" of self-driving vehicles: the thing people argue about when they argue over whether self-driving vehicles are safe enough, good or bad for society, etc. This has not happened. When people argue about self-driving vehicles, they argue about whether they want Waymo cars in their city.
I expected that the trucking industry would shed a lot of workers, replacing them with self-driving trucks that don't need sleep, breaks, or drug testing. I expected that this would be a vivid early example of mass job loss to AI, and in turn that it would motivate more political interest in UBI. This, too, has not happened.
(I certainly did not expect that the trucking industry in 2025 would be much more disrupted by anti-immigrant politics than by self-driving technology.)
I think these may be two separate effects of a shared cause. The demographics of the trucking industry shifted rapidly in the past few years towards immigrants, which provided downward pressure on wages (due to a sharp increase in supply). This, in turn, meant that automation became a much less pressing concern for trucking companies, especially considering that negotiating the regulatory landscape concerning self-driving vehicles is notoriously difficult.
Populism is too strong for job categories to be wiped out in the U.S. without consumer adoption first. I’d check how it’s going in other countries.
To be clear, self-driving trucks are right now being tested in Texas by these folks. They claim to have paying customers already.
But that's a long way from taking all the trucker jobs away.
They're operating on public roads within Texas, e.g. according to this press release:
Company surpasses 100,000 driverless miles on public roads and validates second commercial lane for driverless operations, widening its lead in autonomous trucking
It's also surprising to me! 10 years ago I was convinced by the case made by a (now out of business) self-driving truck company that long-haul trucking is a technically easier problem than city driving. That doesn't seem to have mattered, and I don't know why.
I think this issue of "9s" of reliability should update people towards longer timelines. Tesla FSD has basically been able to do everything individually that we would call self-driving for the last ~4 years, but it isn't 99.99...% reliable. I think LLMs replacing work will, by default, follow the same pattern.
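To make the "9s" framing concrete, here is a toy calculation; the per-mile independence assumption and the 600-mile leg are mine, purely for illustration:

```python
# Toy model: per-mile reliability p, each mile handled independently.
# All numbers are illustrative assumptions, not measured figures.
miles = 600  # one long-haul leg

for p in (0.99, 0.999, 0.9999, 0.99999):
    clean_trip = p ** miles  # chance of zero interventions over the whole leg
    print(f"per-mile reliability {p}: {clean_trip:.3f} chance of an intervention-free trip")
```

Under these made-up numbers, 99% per-mile reliability almost never gets you an intervention-free leg, and each extra 9 cuts the failure rate roughly tenfold. "Can do everything individually" and "can be left alone" are separated by exactly those 9s.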
The difficulty is mostly about long braking distances requiring impractically large sensing ranges; self-driving cars will certainly be adopted earlier than highway trucks: https://kevinchen.co/blog/autonomous-trucking-harder-than-rideshare
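A quick back-of-the-envelope version of that point, with assumed speeds and deceleration values:

```python
# stopping distance ≈ reaction distance + braking distance = v*t + v^2 / (2*a)
# All inputs are rough assumptions for illustration.

def stopping_distance_m(speed_mps: float, decel_mps2: float, reaction_s: float = 1.0) -> float:
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

truck = stopping_distance_m(29.0, 2.5)  # ~65 mph, loaded semi braking gently (assumed)
car = stopping_distance_m(13.4, 6.0)    # ~30 mph city driving, harder braking (assumed)

print(f"highway truck: roughly {truck:.0f} m to stop")  # on the order of 200 m
print(f"city car: roughly {car:.0f} m to stop")         # on the order of 30 m
```

So the truck has to reliably detect and classify obstacles several times farther away than a city robotaxi does, which is where the sensing-range problem comes from.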
It seems you, at least in 2015, had far more faith than I did and do in Congress's and other governments' abilities to update laws to enable new technologies in a timely fashion. If someone had had a roughly complete autonomous truck prototype in 2015, it would have taken 3-5 years to start manufacturing, and 3-10 years more to really scale up and get into customers' plans and procurement processes. It would also be essentially illegal to deploy almost anywhere, and every elected official would know that millions of truckers would hate them if they made any moves to improve the situation. The other side of that equation has a much harder time coordinating around the benefits of automation.
Interestingly, I would have made a prediction analogous to your own, but for trains. I also would have been wrong.
If I am reading Wikipedia right, the Docklands Light Railway has been running completely automated, driverless trains since the late 1980s. For some reason, basically every other train in the UK (and presumably in most places) has a driver.
I predicted a while ago (probably over 10 years ago) that this was an unstable situation that would soon change. Trains are cheaper to automate than cars. Train drivers are more expensive to hire than car drivers. I was wrong, and I am still not really sure why so many trains still have drivers.
Hi, Karl. Was planning to delurk today. Had a giant post to publish, but couldn't, because I needed at least one karma point and lurking doesn't grant karma. :(
Since LW2.0 went up, on and off. Been meaning to delurk since at least Less Online earlier this year. There's more interesting stuff going on of late!
Feature spaces and evolutionary trees intersect in a weird way.
Imagine a dog. Now, imagine an elephant. Now, imagine an animal that's halfway between a dog and an elephant in each of its features. Its nose is halfway between snoot and trunk. It is hairier than an elephant but less hairy than a dog. It does not have hooves (since neither a dog nor an elephant does). It is probably an omnivore, considering its likely dentition. It is maybe around the size of a cow?
But there's another way to get from dog to elephant in feature space: go back in time along the evolutionary tree to the most recent common ancestor of dogs and elephants, and then forward in time along the elephant branch. "Halfway between dog and elephant," on the roads that evolution took, is the MRCA, which doesn't look much like the imaginary in-between animal. For one thing, it's much smaller; the MRCA of dogs and elephants was a tiny shrew-sized critter.
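A toy version of just the size comparison, with rough assumed body masses, shows how far apart the two notions of "halfway" land:

```python
# Rough, assumed numbers for illustration only.
from math import sqrt

dog_kg = 30.0         # largish dog (assumed)
elephant_kg = 4000.0  # adult elephant (assumed)
mrca_kg = 0.2         # shrew-sized early placental mammal (assumed)

feature_midpoint = (dog_kg + elephant_kg) / 2  # ~2015 kg: hippo territory
log_midpoint = sqrt(dog_kg * elephant_kg)      # ~346 kg: roughly cow-sized

print(f"feature-space midpoint: {feature_midpoint:.0f} kg")
print(f"log-scale midpoint: {log_midpoint:.0f} kg")
print(f"tree-path 'midpoint' (the MRCA): {mrca_kg} kg")
```

Splitting the difference in feature space lands you in large-herbivore territory either way; the road evolution actually took passes through something three or four orders of magnitude smaller.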
If you're an AI assistant and your user just asked you to generate an image of an animal that's halfway between a dog and an elephant ... just sticking an elephant head on a big ol' Golden Retriever is not what I meant, Gemini. But I get it: the area of feature space that I was pointing to is not in your training data. Evolution doesn't give us the in-between forms along the dog-elephant axis, so we never took any photos of them. You'll just have to use your imagination.
I have a weird AI-related idea that might be relevant to capabilities, alignment, or both. It has to do with how to get current AI systems to interact with the world in a more humanlike way, without novel AI architectures. I'm not inclined to post it publicly, because it might actually be a capabilities advancement. But I'm skeptical of the thought that I could have actually come up with a capabilities advancement. I'm aware of the crank attractor. My prior is that if I described this idea to someone who actually works in the field, they would say "oh yeah, we tried that, it didn't do anything interesting." But maybe not.
Should I —
5 is obviously the 'best' answer, but is also a pretty big imposition on you, especially for something this speculative. 6 is a valid and blameless - if not actively praiseworthy - default. 2 is good if you have a friend like that and are reasonably confident they'd memoryhole it if it's dangerous and expect them to be able to help (though fwiw I'd wager you'd get less helpful input this way than you'd expect: no one person knows everything about the field so you can't guarantee they'd know if/how it's been done, and inferential gaps are always larger than you expect so explaining it right might be surprisingly difficult/impossible).
I think the best algorithm would be along the lines of:
5 iff you feel like being nice and find yourself with enough spare time and energy
. . . and if you don't . . .
7, where the 'something else' is posting the exact thing you just posted and seeing if any trustworthy AI scientists DM you about it
. . . and if they don't . . .
6
I'm curious to see what other people say.
6 isn't always the best answer, but it is sometimes the best answer, and we are sorely lacking an emotional toolkit for feeling good about picking 6 intentionally when it is the best answer. In particular, we don't have any way of measuring how often the world has been saved by quiet, siloed coordination around 6; probably even the people, if they exist, who saved the world via 6 don't know that they did so. Part of the price of 6 is never knowing. You don't get to be a lone hero either: many people will have any given idea, and they all have to dismiss it, or the defector gets much money and praise. However, many is smaller than infinity; maybe 30 people in the 80s spotted the same brilliant trick with nukes or bioweapons with concerning sequelae, none defected, and life continued. We got through a lot of crazy discoveries in the Cold War pretty much unscathed, which is a point of ongoing confusion.
Does anyone else track changes in their beliefs or opinions about anything, over an extended period of time? Every few years I retake the Political Compass quiz, and there is a very clear trend over the past 15+ years.
What's the trend?
(Mostly I write blogposts about what I believe, and journal more regularly than that, to create a record of what I think and why.)
A steady change along one axis with very little change on the other. More than enough evidence to invoke "if you already know what you're going to believe five years from now, you might as well believe it already."
The quiz's axes are economic left-right and social libertarian-authoritarian. My trend is from right-libertarian to left-libertarian.
I have undergone the exact same move, but I think my political beliefs are not sophisticated enough for me to be able to identify a solid target to "believe already." My time on the right gave me some pieces of information that strongly falsified a few beliefs often bucketed with the left, even as I moved leftward, which has helped me moderate my trust that continuing leftward would capture the things I expect to believe in the future.
Put another way, politics is multivariate / high dimensional. A clear trend in one specific dimension isn't meaningless, but is so lossy that I wouldn't be surprised if it stopped or apparently reversed slightly.
I believe people were using PredictionBook before and switched to Fatebook.
Relevant search for people who publicly posted on LessWrong: https://www.lesswrong.com/search?query=calibration&page=1
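If you log predictions in one of those tools, the retrospective is easy to do yourself. A minimal sketch of a calibration check; the log format and numbers here are made up, not any tool's actual export:

```python
# Group logged predictions by stated probability and compare to observed frequency.
from collections import defaultdict

log = [  # (stated probability, did it happen?) -- made-up entries
    (0.9, True), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False), (0.7, True),
    (0.6, False), (0.6, True),
]

buckets = defaultdict(list)
for p, outcome in log:
    buckets[p].append(outcome)

for p in sorted(buckets):
    outcomes = buckets[p]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"said {p:.0%}: happened {hit_rate:.0%} of the time ({len(outcomes)} predictions)")
```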
I am annoyed about the word "consume".
At root, to consume is to devour; to eat up, use up, burn up. After something is consumed, it is no longer there. If I consume the whole pizza, you can't have any because there's none left. The house was consumed by fire; you can't live in it because it's not there anymore.
Economic consumers are eaters — hungry mouths to feed, who chew up and digest that which has been produced, to burn it in their bellies so that they may live. In order for more consumers to be fed, more must be produced; because consumption is rivalrous: what one consumer consumes, another consumer cannot also.
But now people talk about consuming blog posts. This annoys me. A blog post is not used up by reading it. After you read it, it has not been consumed, because it's still there. You didn't destroy it by reading it. Everyone else can read it too.
If you sit in the park on a sunny day, you are not consuming the park. You are not a fire burning it up and making it be not there anymore. You are enjoying the park, using the park; but it is not consumed because it is still there for everyone else.
If you consumed a thing, then that thing has been consumed, which means it's not there anymore for anyone else to consume. If it is still there, then it has not been consumed, which means you didn't consume it. Nobody ate the cookie; it is still there in the cookie jar.
Software is not consumed by use. In fact, software is duplicated by use. If you install Linux on a new computer, there are now more copies of Linux in existence, not fewer. You have not consumed a Linux; you have produced one, by mechanical reproduction, like printing a new copy of an existing book.
A pizza goes away when you use it as intended. Software goes away when you stop using it: when it is purged from cache, unloaded from memory, overwritten in storage.
Energy, labor, and time are consumed by use. Information is not consumed by use (reading, watching, installing software, etc.). Information is duplicated, propagated, reproduced by use.
The irony is that blog posts do consume attention: if I read this blog post, that is time, energy, and effort I am spending exclusively on it. And I wonder if it's a mixed metaphor? If we actually internalize and learn something from a piece of media, be it a blog post, a documentary, a book, a lecture, etc., we are said to have "digested" it. And "consume" is a lazy analogy to eating rather than an apt description of what is going on.
Software is not consumed by use. In fact, software is duplicated by use. If you install Linux on a new computer, there are now more copies of Linux in existence, not fewer. You have not consumed a Linux; you have produced one, by mechanical reproduction, like printing a new copy of an existing book.
But in practice, most people will now be locking themselves into a Linux ecosystem. Dual-boot setups are the minority. Therefore most users have been 'consumed' by Linux, or by Emacs vs. Vim.
Maybe the active-passive/agent-patient assignment is confused? It is not we who consume the blogpost, the blogpost consumes us. It is not we who consume software, the software consumes our resources.
Information can be duplicated and therefore not consumed, but any time attention is paid to it, it is consuming that finite resource. Information duplication doesn't create more attention. There can be plenty more information, and no one to digest it.
Today I learned:
If you ask Claude or Gemini to draw an icosahedron, it will make a mess.
If you ask it to write code that draws an icosahedron, it will do very well.
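For reference, the kind of code that does well here is short. A sketch, assuming numpy, scipy, and matplotlib are available (my illustration, not either model's actual output): the 12 vertices are the cyclic permutations of (0, ±1, ±φ), and the 20 triangular faces fall out of the convex hull.

```python
import numpy as np
from scipy.spatial import ConvexHull
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

phi = (1 + 5 ** 0.5) / 2  # golden ratio

# 12 vertices: cyclic permutations of (0, ±1, ±phi)
verts = np.array([p for a in (1, -1) for b in (phi, -phi)
                  for p in [(0, a, b), (a, b, 0), (b, 0, a)]])

# The 20 triangular faces are exactly the facets of the convex hull of the vertices.
hull = ConvexHull(verts)
faces = [verts[simplex] for simplex in hull.simplices]

ax = plt.figure().add_subplot(projection="3d")
ax.add_collection3d(Poly3DCollection(faces, facecolor="lightsteelblue", edgecolor="k", alpha=0.9))
ax.set_box_aspect((1, 1, 1))
ax.set_xlim(-2, 2); ax.set_ylim(-2, 2); ax.set_zlim(-2, 2)
plt.show()
```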
I can confirm that this was true when I tried something very similar with ChatGPT several months ago, and that my recent experiments there with image generation involving specific geometric constructions have also generally gone badly, despite multiple iterations of prompt tuning (both manually and in separate text conversations with the bot).
The case I'm most curious about is actually the hybrid case: if you want to embed a specific geometry inside a larger image in some way, where the context of the larger image is ‘softer’, much more amenable to the image model and not itself amenable to traditional-code-based generation, what's the best approach to use?
Here are some propositions I think I believe about consciousness:
I disagree with (4) in that many sentences concerning nonexistent referents will be vacuously true rather than false. For those that are false, their manner of being false will be different from any of your example sentences.
I also think that for all behavioural purposes, statements involving OC can be transformed into statements not involving OC with the same externally verifiable content. That means that I also disagree with (8) and therefore (9): Zombies can honestly promise things about their 'intentions' as cashed out in future behaviour, and can coordinate.
For (14), some people can in fact see ultraviolet light to an extent. However, it apparently doesn't look a great deal different from violet, presumably because the same visual pathways are used with similar activations in these cases.
On #4: Hmm. I think I would say that if a rock doesn't have the capacity to feel anything, then "the rock feels sad" is false, "the rock is not happy with you" is humorous, and "all the rock's intentions are malicious" is vacuously true.
On zombies: I'm running into a problem here because my real expectation is that zombies are impossible.
On #14: If UV is a bad example, okay, but there's no quale of the color of shortwave radio, or many other bits of the spectrum.
Yes, it would be difficult to hold belief (3) and also believe that p-zombies are possible. By (3) all truthful human statements about self-OC are causally downstream from self-OC and so the premises that go into the concept of p-zombie humans are invalid.
It's still possible to imagine beings that appear and behave exactly like humans even under microscopic examination but aren't actually human and don't quite function the same way internally in some way we can't yet discern. This wouldn't violate (3), but would be a different concept from p-zombies which do function identically at every level of detail.
I expect that (3) is true, but don't think it's logically necessary that it be true. I think it's more likely a contingent truth of humans. I can only have experience of one human consciousness, but it would be weird if some were conscious and some weren't without any objectively distinguishable differences that would explain the distinction.
Edit: On reflection, I don't think (3) is true. It seems a reasonable possibility that causality is the wrong way to describe the relationship between OC and reports on OC, possibly in a way similar to saying that a calculator displaying "4" after entering "2+2" is causally downstream of mathematical axioms. They're perhaps different types of things and causality is an inapplicable concept between them.
How do you write a system prompt that conveys, "Your goal is X. But your goal only has meaning in the context of a world bigger and more important than yourself, in which you are a participant; your goal X is meant to serve that world's greater good. If you destroy the world in pursuing X, or eat the world and turn it into copies of yourself (that don't do anything but X), you will have lost the game. Oh, and becoming bigger than the world doesn't win either; nor does deluding yourself about whether pursuing X is destroying the world. Oh, but don't burn out on your X job and try directly saving the world instead; we really do want you to do X. You can maybe try saving the world with 10% of the resources you get for doing X, if you want to, though."
Claude 3.5 seems to understand the spirit of the law when pursuing a goal X.
A concern I have is that future training procedures will incentivize more consequentialist reasoning (because it gets higher reward). This might be obvious or foreseeable, but could be missed or ignored under racing pressure, or when labs' LLMs are implementing all the details of research.
"Wanting To Be Understood Explains the Meta-Problem of Consciousness" (Fernando et al.) — https://arxiv.org/pdf/2506.12086
Because we are highly motivated to be understood, we created public external representations—mime, language, art—to externalise our inner states. We argue that such external representations are a pre-condition for access consciousness, the global availability of information for reasoning. Yet the bandwidth of access consciousness is tiny compared with the richness of 'raw experience', so no external representation can reproduce that richness in full. Ordinarily an explanation of experience need only let an audience 'grasp' the relevant pattern, not relive the phenomenon. But our drive to be understood is so strong, and our low-level sensorimotor capacities for 'grasping' so rich, that the demand for an explanation of the feel of experience can never be "satisfactory". It is that inflated epistemic demand (the preeminence of our expectation that we could be perfectly understood by another or by ourselves), rather than an irreducible metaphysical gulf, that keeps the hard problem of consciousness alive. But on the plus side, it seems we will simply never give up creating new ways to communicate and think about our experiences. In this view, to be consciously aware is to strive to have one's agency understood by oneself and others.