I agree feedback is a big part of it. For example, the times in my life when I've been most motivated to play musical instruments were when I had regular opportunities to play in front of people. Whenever that disappeared, the interest went away too.
But also I think some of it is sticky, or due to personality factors. We could even say it's not about willpower at all, but about value differences. Some people are just more okay with homeostasis, staying at a certain level (which can be lower or higher for different people) and using only as much effort as n...
Good post. But I thought about this a fair bit and I think I disagree with the main point.
Let's say we talk about two AIs merging. Then the tuple of their expected utilities from the merge had better be on the Pareto frontier, no? Otherwise they'd just do a better merge that gets them onto the frontier. Which specific point on the frontier is a matter of bargaining, but the fact that they want to hit the frontier isn't, it's a win-win. And the merges that get them to the frontier are exactly those that output an EUM agent. If the point they want to hit is ...
"Apparatchik" in the USSR was some middle-aged Ivan Ivanovich who'd yell at you in his stuffy office for stepping out of line. His power came from the party apparatus. While the power of Western activists is the opposite: it comes from civil society, people freely associating with each other.
This rhetorical move, calling a Western thing by an obscure and poorly fitting Soviet name, is a favorite of Yarvin: "Let's talk about Google, my friends, but let's call it Gosplan for a moment. Humor me." In general I'd advise people to stay away from his nonsense, it's done enough harm already.
On a meta level, I have a narrative that goes something like: LessWrong tried to be truth-seeking, but was scared of discussing the culture war, so blocked that off. But then the culture war ate the world, and various harms have come about from not having thought clearly about that (e.g. AI governance being a default left-wing enterprise that tried to make common cause with AI ethics). Now cancel culture is over and there are very few political risks to thinking about culture wars, but people are still scared to. (You can see Scott gradually dipping his to...
The objection I'm most interested in right now is the one about induced demand (that's not the right term, but let's roll with it). Like, let's say we build many cheap apartments in Manhattan. Then the first bidders for them will be rich people - from all over the world! - who would love to get a Manhattan apartment for a bargain price. The priced-out locals will stay just as priced out, shuffled to the back of the line, because there are plenty of rich people in the world willing to outbid them. Maybe if we build very many apartments, and not just i...
Maybe you're pushing your proposal a bit much, but anyway as creative writing it's interesting to think about such scenarios. I had a sketch for a weird utopia story where just before the singularity, time stretches out for humans because they're being run at increasing clock speed, and the Earth's surface also becomes much larger and growing. So humanity becomes this huge, fast-running civilization living inside an AI (I called it "Quetzalcoatl", not sure why) and advising it how it should act in the external world.
My wife used to have a talking doll that said one phrase in a really annoying voice. Well, at some point the doll short-circuited or something, and started turning on at random times. In the middle of the night for example it would yell out its phrase and wake everyone up. So eventually my wife took the doll to the garbage dump. And on the way back she couldn't stop thinking about the doll sitting there in the garbage, occasionally yelling out its phrase: "Let's go home! I'm already hungry!" This isn't creative writing btw, this actually happened.
The thread about Tolkien reminded me of Andrew Hussie's writing process. Start by writing cool scenes, including any elements you like. A talking tree? Okay. Then worry about connecting it with the story. The talking tree comes from an ancient forest and so on. And if you're good, the finished story will feel like it always needed a talking tree.
I'd be really interested in a similar breakdown of JK Rowling's writing process, because she's another author with a limitless "toybox".
I think something like the Culture, with aligned superintelligent "ships" keeping humans as basically pets, wouldn't be too bad. The ships would try to have thriving human societies, but that doesn't mean granting all wishes - you don't grant all wishes of your cat after all. Also it would be nice if there was an option to increase intelligence, conditioned on increasing alignment at the same time, so you'd be able to move up the spectrum from human to ship.
Maybe tangential, but this reminded me of a fun fact about Hong Kong's metro: it's funded by land value. They build a station and get development rights to the land near it. Well, building the station obviously makes land around it more valuable. So they end up putting stations where they'd be most useful, and fares can be cheap because the metro company makes plenty of money from land. So the end result is cheap, well-planned public transport which is profitable and doesn't take government money.
Not to pick on you specifically, but just as a general comment, I'm getting a bit worried about the rationalist book review pipeline. It seems it usually goes like this: someone writes a book with an interesting idea -> a rationalist (like Scott) writes a review of it, maybe not knowing much about the topic but being intrigued by the idea -> lots of other rationalists get the idea cached in their minds. So maybe it'd be better if book reviews were written by people who know a lot about the topic, and can evaluate the book in context.
Like, a while ago...
I think I can destroy this philosophy in two kicks.
Kick 1: pleasure is not one-dimensional. There are different parts of your brain that experience different pleasures, with no built-in way to compare between them.
When you retreat from kick 1 by saying "my decision-making provides a way to compare, the better pleasure is the one I'll choose when asked", here comes kick 2: your decision-making won't work for that. There are compulsive behaviors that people keep doing without getting much pleasure from them. And in every decision there's a possible component o...
Good point. But I think the real game changer will be self-modification tech, not longevity tech. In that case we won't have a "slow adaptation" problem, but we'll have a "fast adaptation in weird directions" problem which is probably worse.
In Copenhagen every street has wide sidewalks and bike lanes in both directions, and there's lots of public transport too. It's good.
I don't understand Eliezer's explanation. Imagine Alice is hard-working and Bob is lazy. Then Alice can make goods and sell them to Bob. Half the money she'll spend on having fun, the other half she'll save. In this situation she's rich and has a trade surplus, but the other parts of the explanation - different productivity between different parts of Alice (?) and inability to judge her own work fairly (?) - don't seem to be present.
No. Committing a crime inflicts damage. But interacting with a person who committed a crime in the past doesn't inflict any damage on you.
Because the smaller measure should (on my hypothesis) be enough to prevent crime, and inflicting more damage than necessary for that is evil.
Because otherwise everyone will gleefully discriminate against them in every way they possibly can.
I think the US has too much punishment as it is, with a very high incarceration rate and prison conditions sometimes approaching torture (prison rape, supermax isolation).
I'd rather give serial criminals some kind of surveillance collars that would detect reoffending and notify the police. I think a lot of such people can be "cured" by high certainty of being caught, not by severity of punishment. There'd need to be laws to prevent discrimination against people with collars, though.
Yeah, I stumbled on this idea a long time ago as well. I never drink sugary drinks, my laptop is permanently in grayscale mode and so on. And it doesn't feel like missing out on fun; on the contrary, it allows me to not miss out. When I "mute" some big, addictive, one-dimensional thing, I start noticing all the smaller things that were being drowned out by it. Like, as you say, noticing the deliciousness of baked potatoes when you're not eating sugar every day, or noticing all the colors in my home and neighborhood when my screen is on grayscale.
I suppose the superassistants could form coalitions and end up as a kind of "society" without too much aggression. But this all seems moot, because superassistants will get outcompeted anyway by AIs that focus on growth. That's the real danger.
I don't quite understand the plan. What if I get access to cheap friendly AI, but there's also another much more powerful AI that wants my resources and doesn't care much about me? What would stop the much more powerful AI from outplaying me for these resources, maybe by entirely legal means? Or is the idea that somehow the AIs in public access are always the strongest possible? That isn't true even now.
This might be obvious, but I don't think we have evidence to support the idea that there really is anything like a concrete plan. All of the statements I've seen from Sam on this issue so far are incredibly basic and hand-wavy.
I suspect that any concrete plan would be fairly controversial, so it's easiest to speak in generalities. And I doubt there's anything like an internal team with some great secret macrostrategy - instead I assume that they haven't felt pressured to think through it much.
I also agree with all of this.
For what an okayish possible future could look like, I have two stories in mind:
Humans end up as housecats. Living among much more powerful creatures doing incomprehensible things, but still mostly cared for.
Some humans get uplifted to various levels, others stay baseline. The higher you go, the more aligned you must be to those below. So still a hierarchy, with super-smart creatures at the top and housecats at the bottom, but with more levels in between.
A post-AI world where baseline humans are anything more than hou...
Thanks for writing this, it's a great explanation-by-example of the entire housing crisis.
Well, Christianity sometimes spread by conquest, but other times it spread peacefully just as effectively. Same for democracy. So I don't think the spread of moral values requires conquest.
Wait, but we know that people sometimes have happy moments. Is the idea that such moments are always outweighed by suffering elsewhere? It seems more likely that increasing the proportion of happy moments is doable, an engineering problem. So basically I'd be very happy to see a world such as in the first half of your story, and don't think it would lead to the second half.
Your theory would predict that we'd be much better at modeling tigers (which hunted us) than at modeling antelopes (which we hunted), but in reality we're about equally bad at modeling either, and much better at modeling other humans.
I don't think this post addresses the main problem. Consider the exchange ratio between labor and land. You need land to live, and your food needs land to be grown. Will you be able to afford more land use for the same work hours, or less? (As a programmer, manager, CEO, whatever super-high-productivity job.) Well, if the same land can be used to run AIs that can do your job N times over, then your labor won't be enough to afford it, and that closes the case.
So basically, the only way the masses can survive long term is by some kind of handouts. It won't just happen by itself due to tech progress and economic laws.
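A toy numeric sketch of that exchange-ratio argument, in Python (all numbers here are invented for illustration; only the ratio matters):

```python
# Toy model of the labor-vs-land exchange ratio. Numbers are hypothetical.
human_output_per_acre = 1.0   # value a human worker produces using one acre's worth of land
N = 20                        # hypothetical: AI copies of your job that the same acre (compute + power) can host
ai_output_per_acre = N * human_output_per_acre

# A landowner rents to the highest bidder. A human's maximum bid is capped by
# their own output, so whenever N > 1 the human gets outbid for the land.
human_max_bid = human_output_per_acre
ai_max_bid = ai_output_per_acre

print(f"human can bid up to {human_max_bid}, AI use can bid up to {ai_max_bid}")
print("human priced out of land:", ai_max_bid > human_max_bid)
```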
I don't buy it. Lots of species have predators and have had them for a long time, but very few species have intelligence. It seems more likely that most of our intelligence is due to sexual selection, a Fisherian runaway that accidentally focused on intelligence instead of brightly colored tails or something.
An ASI project would be highly distinguishable from civilian AI applications and not integrated with a state’s economy
Why? I think there's a smooth ramp from economically useful AI to superintelligence: AIs gradually become better at many tasks, and these tasks help more and more with improving AI in turn.
For cognitive enhancement, maybe we could have a system like "the smarter you are, the more aligned you must be to those less smart than you"? So enhancement would be available, but would make you less free in some ways.
I think the problem with WBE is that anyone who owns a computer and can decently hide it (or fly off in a spaceship with it) becomes able to own slaves, torture them and whatnot. So after that technology appears, we need some very strong oversight - it becomes almost mandatory to have a friendly AI watching over everything.
What about biological augmentation of intelligence? I think if other avenues are closed, this one can still go pretty far and make things just as weird and risky. You can imagine biological self-improving intelligences too.
So if you're serious about closing all avenues, it amounts to creating a god that will forever watch over everything and prevent things from becoming too smart. It doesn't seem like such a good idea anymore.
Sure. But in an economy with AIs, humans won't be like Bob. They'll be more like Carl the bottom-percentile employee who struggles to get any job at all. Even in today's economy lots of such people exist, so any theoretical argument saying it can't happen has got to be wrong.
And if the argument is quantitative - say, that the unemployment rate won't get too high - then imagine an economy with 100x more AIs than people, where unemployment is only 1% but all people are unemployed. There's no economic principle saying that can't happen.
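The arithmetic behind that scenario, as a quick sanity check (numbers are purely illustrative):

```python
# Quick check: 100x more AI workers than people, ~1% unemployment,
# yet every human can be unemployed. Numbers are illustrative only.
humans = 1_000_000
ais = 100 * humans
total_workers = humans + ais

unemployed = humans              # assume all humans are out of work, all AIs employed
rate = unemployed / total_workers
print(f"unemployment rate: {rate:.2%}")   # prints ~0.99%, i.e. about 1%
```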
That, incidentally, implies that human labor will retain a well-paying niche—just as less-skilled labor today can still get jobs despite more-skilled labor also existing.
Less skilled labor has a well-paying niche today?
Yeah, on further thought I think you're right. This is pretty pessimistic then, AI companies will find it easy to align AIs to money interests, and the rest of us will be in a "natives vs the East India Company" situation. More time to spend on alignment then matters only if some companies actually try to align AIs to something good instead, and I'm not sure any companies will do that.
I wonder how hard it would be to make the Sun stop shining? Maybe the fusion reaction could be made subcritical by adding some "control rod" type stuff.
Edit: I see other commenters also mentioned spinning up the Sun, which would lower the density and stop the fusion. Not sure which approach is easier.
I guess the opposite point of view is that aligning AIs to AI companies' money interests is harmful to the rest of us, so it might actually be better if AI companies didn't have much time to do it, and the AIs got to keep some leftover morality from human texts. And WBE would enable the powerful to do some pretty horrible things to the powerless, so without some kind of benevolent oversight a world with WBE might be scary. But I'm not sure about any of this, maybe your points are right and mine are wrong.
Huh? Environmentalism means letting things work the way they naturally did, not changing them to be "reversible" or something else.
There have been many controversies about the World Bank. A good starting point is this paragraph from Naomi Klein's article:
...The truth is that the bank's credibility was fatally compromised when it forced school fees on students in Ghana in exchange for a loan; when it demanded that Tanzania privatise its water system; when it made telecom privatisation a condition of aid for Hurricane Mitch; when it demanded labour "flexibility" in Sri Lanka in the aftermath of the Asian tsunami; when it pushed for eliminating food subsidies in post-invasion Iraq. Ecuado
Fair enough. And it does seem to me like the action will be new laws, though you're right it's hard to predict.
This one isn't quite a product though, it's a service. The company receives a request from a criminal: "gather information about such-and-such person and write a personalized phishing email that would work on them". And the company goes ahead and does it. It seems very fishy. The fact that the company fulfilled the request using AI doesn't even seem very relevant, imagine if the company had a staff of secretaries instead, and these secretaries were willing to make personalized phishing emails for clients. Does that seem like something that should be legal?...
Yeah, this is really dumb. I wonder if it would've gone better if the AI profiles had been more honest to begin with, using actual datacenter photos as their profile pics and so on.
Are AI companies legally liable for enabling such misuse? Do they take the obvious steps to prevent it, e.g. by having another AI scan all chat logs and flag suspicious ones?
For every person saying "religion gave me a hangup about sex" there will be another who says "religion led to me marrying younger" or "religion led me to have more kids in marriage". The right question is whether religion leads to a more anti-reproduction attitude on average, but I can't see how that can be true when religious people have higher fertility.
I've held this view for years and am even more pessimistic than you :-/
In healthy democracies, the ballot box could beat the intelligence curse. People could vote their way out.
Unfortunately, democracy itself depends on the economic and military relevance of masses of people. If that goes away, the iceberg will flip and the equilibrium system of government won't be democracy.
Tech that increases human agency, fosters human ownership of AI systems or clusters of agents, or otherwise allows humans to remain economically relevant
It seems really hard to think of any examples of such tech.
But many do maintain an explicit approval hierarchy that ranks celibacy and sexual restraint above typical sexual behavior
I think we just disagree here. The Bible doesn't say married people shouldn't have sex, and no prominent Christians say that either. There are norms against nonmarital sex, and there are norms against priests having sex, but you draw a connection between these things and generalize it to all people, which doesn't sound right to me.
There is text in the bible that strongly suggests the new testament set up celibacy as morally superior to sex within marriage. In practice, this mostly only one-shotted autists who got "yay bible" from their social group, and read the bible literally, but didn't read enough of the bible to realize that it is a self-contradicting mess.
You can "un self contradict" the bible, maybe, with enough scholarship such that people who learn the right interpretative schemes can learn about how maybe Paul's stuff shouldn't be taken as seriously as the red text, and ha...
Yeah, I missed a big part of your point on that. But another part maybe I didn’t? Your post started out talking about norms against nonmarital sex. Then you jump from that to saying they’re norms against reproduction - which doesn't sound right, religious people reproduce fine. And then you say (unless I'm missing something) that they're based on hypocrisy, enabling other people to not follow these norms, which also doesn't sound right.
I think this is wrong. First you say that celibacy would be pushed on lower-status people like peasants, then you say it would be pushed on higher-status people like warriors. But actually neither happens: it's not to the group's advantage (try to explain how making peasants or warriors celibate would benefit the group - you can't), and we don't find major religions doing it either; they are pro-fertility for almost all people. Celibacy of priests is an exception, but it's small, and your explanations don't work for it either.
I don't think I made those claims. I did say that clerics are often supposed to be celibate, and warriors are generally supposed to move towards danger, in a single sentence, so I see how those claims might have been confused.
The general pattern I'm pointing out is that some scarce resources, or the approval which is a social proxy for such resources, are allocated preferentially to people who adopt an otherwise perverse preference. These systems are only sustainable with large amounts of hypocrisy, where people are on the whole "bad" rather than "good" ac...
I think they meant that when people are afraid to lose their jobs, they spend less, leading to less demand for other people's work.
Your examples sound familiar to me too, but after rereading your comment and mine, maybe it all can be generalized in a different way. Namely, that internal motivation leads to a low level of effort: reading some textbooks now and then, solving some exercises, producing some small things. It still feels a bit like staying in place. Whereas it takes external motivation to actually move forward with math, or art, or whatever - to spend lots of effort and try to raise my level every day. That's how it feels for me. Maybe some people can do it without external motivation, or maybe they lucked into getting external motivation in the right way, I don't know.