Long-time lurker (c. 2013), recent poster. I also write on the EA Forum.
I saw someone worried that AI is going to cause real economic trouble soon by replacing travel agents. But the advent of the internet supposedly made travel agents completely unnecessary, and even that only wiped out about half of travel agent jobs. The number of travel agents has stayed roughly the same since 2008!
This reminds me of Patrick McKenzie's tweet thread:
Technology-driven widespread unemployment ("the robots will take all the jobs") is, like wizards who fly spaceships, a fun premise for science fiction but difficult to find examples for in economic history. (The best example I know is for horses.)
I understand people who are extremely concerned by it, but I think you need a theory for it which successfully predicts bank teller employment trends over the last 50 years prior to justifying the number of column inches it gets.
"Haha Patrick you're not going to tell me bank teller employment is up since we made a machine literally called Automated Teller Machine are you." Bessen, 2015:
"Wait why did that happen?" Short version: the ATM makes each bank branch need less tellers to operate at a desired level of service, but the growth of the economy (and the tech-driven decline in cost of bank branch OpEx) caused branch count to outgrow decline in tellers/branch.
One subsubgenre of writing I like is the stress-testing of a field's cutting-edge methods by applying them to another field, to see how much knowledge and insight the methods recapitulate and what else we learn from the exercise. Sometimes this takes the form of parables, like Scott Alexander's story of the benevolent aliens trying to understand Earth's global economy from orbit and intervening with crude methods (like materialising a billion barrels of oil on the White House lawn to solve a recession hypothesised to be caused by an oil shortage), meant to intuition-pump the current state of psychiatry and the frame of thinking of human minds as dynamical systems. Sometimes they're papers, like Eric Jonas and Konrad P. Kording's Could a Neuroscientist Understand a Microprocessor? (they conclude that no, regardless of the amount of data, "current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems"; "the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor"). Unfortunately I don't know of any other good examples.
I'm not sure about Friston's stuff to be honest.
But Watts lists a whole bunch of papers in support of the blindsight idea, contra Seth's claim — to quote Watts:
(I'm also reminded of DFW's How Tracy Austin Broke My Heart.)
To be clear I'm not arguing that "look at all these sources, it must be true!" (we know that kind of argument doesn't work). I'm hoping for somewhat more object-level counterarguments is all, or perhaps a better reason to dismiss them as being misguided (or to dismiss the picture Watts paints using them) than what Seth gestured at. I'm guessing he meant "complex general cognition" to point to something other than pure raw problem-solving performance.
Thanks, is there anything you can point me to for further reading, whether by you or others?
Peter Watts is working with Neill Blomkamp to adapt his novel Blindsight into an 8-10-episode series:
“I can at least say the project exists, now: I’m about to start writing an episodic treatment for an 8-10-episode series adaptation of my novel Blindsight.
“Neill and I have had a long and tortured history with that property. When he first expressed interest, the rights were tied up with a third party. We almost made it work regardless; Neill was initially interested in doing a movie that wasn’t set in the Blindsight universe at all, but which merely used the speculative biology I’d invented to justify the existence of Blindsight’s vampires. “Sicario with Vampires” was Neill’s elevator pitch, and as chance would have it the guys who had the rights back then had forgotten to renew them. So we just hunkered quietly until those rights expired, and the recently-rights-holding parties said Oh my goodness we thought we’d renewed those already can we have them back? And I said, Sure; but you gotta carve out this little IP exclusion on the biology so Neill can do his vampire thing.
“It seemed like a good idea at the time. It was a good idea, dammit. We got the carve-out and everything. But then one of innumerable dead-eyed suits didn’t think it was explicit enough, and the rights-holders started messing us around, and what looked like a done deal turned to ash. We lost a year or more on that account.
“But eventually the rights expired again, for good this time. And there was Neill, waiting patiently in the shadows to pounce. So now he’s developing both his Sicario-with-vampires movie and an actual Blindsight adaptation. I should probably keep the current status of those projects private for the time being. Neill’s cool with me revealing the existence of the Blindsight adaptation at least, and he’s long-since let the cat out of the bag for his vampire movie (although that was with some guy called Joe Rogan, don’t know how many people listen to him). But the stage of gestation, casting, and all those granular nuts and bolts are probably best kept under wraps for the moment.
“What I can say, though, is that it feels as though the book has been stuck in option limbo forever, never even made it to Development Hell, unless you count a couple of abortive screenplays. And for the first time, I feel like something’s actually happening. Stay tuned.”
When I first read Blindsight over a decade ago it blew my brains clean out of my skull. I'm cautiously optimistic about the upcoming series; we'll see...
There's a lot of fun stuff in Anders Sandberg's 1999 paper The Physics of Information Processing Superobjects: Daily Life Among the Jupiter Brains. One particularly vivid detail was (essentially) how the square-cube law imposes itself upon Jupiter brain architecture by forcing >99.9% of the volume to consist of comms links between compute nodes, even after assuming a "small-world" network structure in which only 1% of links are long-range and arbitrarily chosen nodes are otherwise connected through short chains of intermediary links.
In this particular case ("Zeus"), the object is a 9,000 km sphere of nearly solid diamondoid consisting mainly of reversible quantum dot circuits and molecular storage systems, surrounded by a concentric shield that protects it from radiation and holds radiators to dissipate heat into space, with energy provided by fusion reactors distributed outside the shield. Only the top 1.35 km layer is compute + memory (proportionally far thinner than the Earth's crust); the rest of the interior is optical comms links. Sandberg calls this the "cortex model".
In a sense this shouldn't be surprising since both brains and current semiconductor chips are mostly interconnect by volume already, but a 1.35 km thick layer of compute + memory encompassing a 9,000 km sphere of optical comms links seems a lot more like a balloon to me than anything, so from now on I'll probably think of them as Jupiter balloons.
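A quick sanity check on that volume split, as a sketch that assumes the 9,000 km figure is the sphere's radius (if it's the diameter the shell fraction roughly doubles, but the conclusion is the same):

```python
# Rough check of the "Jupiter balloon" picture: what fraction of the sphere's
# volume does the 1.35 km compute + memory shell occupy?
# Assumes the 9,000 km figure is the radius.

import math

R = 9_000.0  # sphere radius in km (assumption)
t = 1.35     # thickness of the compute + memory shell in km

def sphere_volume(r: float) -> float:
    return (4.0 / 3.0) * math.pi * r ** 3

shell_fraction = 1 - sphere_volume(R - t) / sphere_volume(R)

print(f"compute + memory shell: {shell_fraction:.4%} of volume")    # ~0.045%
print(f"interior comms links:   {1 - shell_fraction:.4%} of volume")  # ~99.955%
```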
Venkatesh Rao's recent newsletter article Terms of Centaur Service caught my eye for his professed joy of AI-assisted writing, both nonfiction and fiction:
In the last couple of weeks, I’ve gotten into a groove with AI-assisted writing, as you may have noticed, and I am really enjoying it. ... The AI element in my writing has gotten serious, and I think is here to stay. ...
On the writing side, when I have a productive prompting session, not only does the output feel information dense for the audience, it feels information dense for me.
An example of this kind of essay is one I posted last week, on a memory-access-boundary understanding of what intelligence is. This was an essay I generated that I got value out of reading. And it didn’t feel like a simple case of “thinking through writing.” There’s stuff in here contributed by ChatGPT that I didn’t know or realize even subconsciously, even though I’ve been consulting for 13 years in the semiconductor industry.
Generated text having elements new to even the prompter is a real benefit, especially with fiction. I wrote a bit of fiction last week that will be published in Protocolized tomorrow that was so much fun, I went back and re-read it twice. This is something I never do with my own writing. By the time I ship an unassisted piece of writing, I’m generally sick of it.
AI-assisted writing allows you to have your cake and eat it too. The pleasure of the creative process, and the pleasure of reading. That’s in fact a test of good slop — do you feel like reading it?
I think this made an impression on me because Venkat's joy contrasts so sharply with many people's criticism of Sam Altman's recent tweet re: their new creative fiction model's completion to the prompt "Please write a metafictional literary short story about AI and grief", including folks like Eliezer, who said "To be clear, I would be impressed with a dog that wrote the same story, but only because it was a dog". I liked the AI's output quite a lot actually, more than I did Eliezer's (and I loved HPMOR, so I should be selected for Eliezer-fiction-bias), and I found myself agreeing with Roon's pushback to him.
Although Roshan's remark that "AI fiction seems to be in the habit of being interesting only to the person who prompted it" does give me pause. While this doesn't seem to be true in the AI vs Eliezer comparison specifically, I do find plausible a hyperpersonalisation-driven near-future where AI fiction becomes superstimuli-level interesting only to the prompter. But I find the contra scenario plausible too. Not sure where I land here.
There's a version of this that might make sense to you, at least if what Scott Alexander wrote here resonates:
I’m an expert on Nietzsche (I’ve read some of his books), but not a world-leading expert (I didn’t understand them). And one of the parts I didn’t understand was the psychological appeal of all this. So you’re Caesar, you’re an amazing general, and you totally wipe the floor with the Gauls. You’re a glorious military genius and will be celebrated forever in song. So . . . what? Is beating other people an end in itself? I don’t know, I guess this is how it works in sports. But I’ve never found sports too interesting either. Also, if you defeat the Gallic armies enough times, you might find yourself ruling Gaul and making decisions about its future. Don’t you need some kind of lodestar beyond “I really like beating people”? Doesn’t that have to be something about leaving the world a better place than you found it?
Admittedly altruism also has some of this same problem. Auden said that “God put us on Earth to help others; what the others are here for, I don’t know.” At some point altruism has to bottom out in something other than altruism. Otherwise it’s all a Ponzi scheme, just people saving meaningless lives for no reason until the last life is saved and it all collapses.
I have no real answer to this question - which, in case you missed it, is “what is the meaning of life?” But I do really enjoy playing Civilization IV. And the basic structure of Civilization IV is “you mine resources, so you can build units, so you can conquer territory, so you can mine more resources, so you can build more units, so you can conquer more territory”. There are sidequests that make it less obvious. And you can eventually win by completing the tech tree (he who has ears to hear, let him listen). But the basic structure is A → B → C → A → B → C. And it’s really fun! If there’s enough bright colors, shiny toys, razor-edge battles, and risk of failure, then the kind of ratchet-y-ness of it all, the spiral where you’re doing the same things but in a bigger way each time, turns into a virtuous repetition, repetitive only in the same sense as a poem, or a melody, or the cycle of generations.
The closest I can get to the meaning of life is one of these repetitive melodies. I want to be happy so I can be strong. I want to be strong so I can be helpful. I want to be helpful because it makes me happy.
I want to help other people in order to exalt and glorify civilization. I want to exalt and glorify civilization so it can make people happy. I want them to be happy so they can be strong. I want them to be strong so they can exalt and glorify civilization. I want to exalt and glorify civilization in order to help other people.
I want to create great art to make other people happy. I want them to be happy so they can be strong. I want them to be strong so they can exalt and glorify civilization. I want to exalt and glorify civilization so it can create more great art.
I want to have children so they can be happy. I want them to be happy so they can be strong. I want them to be strong so they can raise more children. I want them to raise more children so they can exalt and glorify civilization. I want to exalt and glorify civilization so it can help more people. I want to help people so they can have more children. I want them to have children so they can be happy.
Maybe at some point there’s a hidden offramp marked “TERMINAL VALUE”. But it will be many more cycles around the spiral before I find it, and the trip itself is pleasant enough.
In my corner of the world, anyone who hears "A4" thinks of this:
re: your last remark, FWIW I think a lot of those writings you've seen were probably intuition-pumped by this parable of Eliezer's, to which I consider titotal's pushback the most persuasive.