All of cousin_it's Comments + Replies

Your examples sound familiar to me too, but after rereading your comment and mine, maybe it all can be generalized in a different way. Namely, that internal motivation leads to a low level of effort: reading some textbooks now and then, solving some exercises, producing some small things. It still feels a bit like staying in place. Whereas it takes external motivation to actually move forward with math, or art, or whatever - to spend lots of effort and try to raise my level every day. That's how it feels for me. Maybe some people can do it without external motivation, or maybe they lucked into getting external motivation in the right way, I don't know.

I agree feedback is a big part of it. For example, the times in my life when I've been most motivated to play musical instruments were when I had regular opportunities to play in front of people. Whenever that disappeared, the interest went away too.

But also I think some of it is sticky, or due to personality factors. We could even say it's not about willpower at all, but about value differences. Some people are just more okay with homeostasis, staying at a certain level (which can be lower or higher for different people) and using only as much effort as n... (read more)

4Viliam
I don't have a coherent theory of motivation, because when I look at myself, it seems that different parts of my life work quite differently.

For example, when I used to write SF, the social motivation was very important. I had friends who liked SF books and movies, we talked about that a lot, many of us hoped to write something good one day. So I had a community, and an audience. My first attempts had some small success, which inspired me to work harder and achieve more. And then... at some moment this all went down in flames... skipping the unimportant details, I think my mistake was becoming visibly more successful than some people in our community who had higher social status than me, so I got the "status slap-down" and the community became much less friendly towards me, and I lost an important source of motivation. For some time I was motivated by a desire to prove them wrong (for example I avoided a soft blacklist by publishing my story under a pseudonym), but gradually writing stopped being fun, and I stopped writing.

For an opposite example, I am interested in mathematics. Sometimes I read a textbook, do some exercises, try to figure something out. I would prefer to have people around me willing to discuss these topics, but the fact that I don't doesn't stop me at all.

Somewhere in the middle is programming. I am motivated to learn new things, but I am not motivated to finish my projects, and I suspect that having an audience would help here. I guess I find learning intrinsically rewarding, but producing things requires a social reward (which I need to receive during the process, not only after having completed the product).

With regards to homeostasis, I think I have always tried to achieve more, it's just that my energy and motivation are very limited (even more so now that I have kids). When I have some free time, I try new things, but often I feel too tired just trying to survive the day. That is, this is partially about me, and partially ab
cousin_itΩ8170

Good post. But I thought about this a fair bit and I think I disagree with the main point.

Let's say we talk about two AIs merging. Then the tuple of their expected utilities from the merge had better be on the Pareto frontier, no? Otherwise they'd just do a better merge that gets them onto the frontier. Which specific point on the frontier is a matter of bargaining, but the fact that they want to hit the frontier isn't - it's a win-win. And the merges that get them to the frontier are exactly those that output an EUM agent. If the point they want to hit is ... (read more)
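A minimal sketch of that point, with toy outcomes and weights of my own (not from the comment): a merged agent that maximizes a fixed weighted sum of the two AIs' utilities always lands on the Pareto frontier of the feasible outcomes, and different bargaining results correspond to different weights.

```python
# Sketch: weighted-sum (EUM-style) merges land on the Pareto frontier.
# All outcomes and numbers below are hypothetical toy values.

# Candidate joint outcomes, tagged with (utility_for_A, utility_for_B).
outcomes = {
    "merge_1": (3.0, 1.0),
    "merge_2": (2.0, 2.5),
    "merge_3": (1.0, 3.0),
    "no_deal": (0.5, 0.5),   # dominated: both AIs do worse
}

def pareto_frontier(points):
    """Outcomes not weakly dominated by any other outcome."""
    return {
        name for name, (ua, ub) in points.items()
        if not any(xa >= ua and xb >= ub and (xa, xb) != (ua, ub)
                   for xa, xb in points.values())
    }

def eum_merge(points, w):
    """Outcome chosen by an agent maximizing w*U_A + (1-w)*U_B."""
    return max(points, key=lambda n: w * points[n][0] + (1 - w) * points[n][1])

frontier = pareto_frontier(outcomes)
for w in (0.2, 0.5, 0.8):        # different bargaining outcomes = different weights
    chosen = eum_merge(outcomes, w)
    assert chosen in frontier    # a weighted-sum maximizer never picks a dominated point
    print(w, chosen)
```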

cousin_it*144

"Apparatchik" in the USSR was some middle-aged Ivan Ivanovich who'd yell at you in his stuffy office for stepping out of line. His power came from the party apparatus. While the power of Western activists is the opposite: it comes from civil society, people freely associating with each other.

This rhetorical move, calling a Western thing by an obscure and poorly fitting Soviet name, is a favorite of Yarvin: "Let's talk about Google, my friends, but let's call it Gosplan for a moment. Humor me." In general I'd advise people to stay away from his nonsense, it's done enough harm already.

2Shankar Sivarajan
What, in your view, distinguishes "civil society" from "party apparatus"? Is it a more meaningful distinction than them speaking English instead of Russian? Also, what is the "harm" you think Yarvin's analogizing the American ruling system to that of the Soviet Union has done?

On a meta level, I have a narrative that goes something like: LessWrong tried to be truth-seeking, but was scared of discussing the culture war, so blocked that off. But then the culture war ate the world, and various harms have come about from not having thought clearly about that (e.g. AI governance being a default left-wing enterprise that tried to make common cause with AI ethics). Now cancel culture is over and there are very few political risks to thinking about culture wars, but people are still scared to. (You can see Scott gradually dipping his to... (read more)

cousin_it*20

The objection I'm most interested in right now is the one about induced demand (that's not the right term but let's roll with it). Like, let's say we build many cheap apartments in Manhattan. Then the first bidders for them will be rich people - from all over the world! - who would love to get a Manhattan apartment for a bargain price. The priced-out locals will stay just as priced out, shuffled to the back of the line, because there are quite a few rich people in the world who are willing to outbid them. Maybe if we build very many apartments, and not just i... (read more)

cousin_it*42

Maybe you're pushing your proposal a bit much, but anyway as creative writing it's interesting to think about such scenarios. I had a sketch for a weird utopia story where, just before the singularity, time stretches out for humans because they're being run at increasing clock speed, and the Earth's surface also becomes much larger and keeps growing. So humanity becomes this huge, fast-running civilization living inside an AI (I called it "Quetzalcoatl", not sure why) and advising it how it should act in the external world.

1ank
Sounds interesting, cousin_it! And thank you for your comment, it wasn't my intention to be pushy, in my main post I actually advocate to gradually democratically pursue maximal freedoms for all (except agentic AIs, until we'll have mathematical guarantees), I want everything to be a choice. So it's just this strange style of mine and the fact that I'm a foreigner) P.S. Removed the exclamation point from the title and some bold text to make it less pushy

My wife used to have a talking doll that said one phrase in a really annoying voice. Well, at some point the doll short-circuited or something, and started turning on at random times. In the middle of the night for example it would yell out its phrase and wake everyone up. So eventually my wife took the doll to the garbage dump. And on the way back she couldn't stop thinking about the doll sitting there in the garbage, occasionally yelling out its phrase: "Let's go home! I'm already hungry!" This isn't creative writing btw, this actually happened.

The thread about Tolkien reminded me of Andrew Hussie's writing process. Start by writing cool scenes, including any elements you like. A talking tree? Okay. Then worry about connecting it with the story. The talking tree comes from an ancient forest and so on. And if you're good, the finished story will feel like it always needed a talking tree.

I'd be really interested in a similar breakdown of JK Rowling's writing process, because she's another author with a limitless "toybox".

cousin_it*40

I think something like the Culture, with aligned superintelligent "ships" keeping humans as basically pets, wouldn't be too bad. The ships would try to have thriving human societies, but that doesn't mean granting all wishes - you don't grant all wishes of your cat after all. Also it would be nice if there was an option to increase intelligence, conditioned on increasing alignment at the same time, so you'd be able to move up the spectrum from human to ship.

Maybe tangential, but this reminded me of a fun fact about Hong Kong's metro: it's funded by land value. They build a station and get some development rights for the land near it. Well, building the station obviously makes land around it more valuable. So they end up putting stations where they'd be most useful, and fares can be cheap because the metro company makes plenty of money from land. So the end result is cheap, well-planned public transport which is profitable and doesn't take government money.

Not to pick on you specifically, but just as a general comment, I'm getting a bit worried about the rationalist book review pipeline. It seems it usually goes like this: someone writes a book with an interesting idea -> a rationalist (like Scott) writes a review of it, maybe not knowing much about the topic but being intrigued by the idea -> lots of other rationalists get the idea cached in their minds. So maybe it'd be better if book reviews were written by people who know a lot about the topic, and can evaluate the book in context.

Like, a while ago... (read more)

8particlemania
Not to pick on you specifically, but just as a general comment, I'm getting a bit worried about the rationalist decontextualized content policing. It seems it usually goes like this: someone cultivates an epistemological practice (say how to extract conceptual insights from diverse practices) -> they decide to cross-post their thoughts on a community blog interested in epistemology -> somebody else unfamiliar with the former's body of work comes across it -> interprets it into a pattern they might rightfully have identified as critique-worthy -> dump the criticism there. So maybe it'd be better if comments were written by people who can click through the author's profile to interpret the post in the right context. [Epistemic status of this comment: Performative, but not without substance.]
2adamShimi
I did not particularly intend to do a book review per se, and I don't claim to be an expert on the topic. So completely fine with tagging this in some way as "non-expert" if you wish. Not planning to change how I write my posts based on this feedback, as I have no interest in following some arbitrary standard of epistemic expertise for a fun little blog post that will be read by 10 people max.
4Mitchell_Porter
We have downvotes if a review is inane, and we have comments if an expert wants to correct something... And we could have a tag like "Expert Review" for reviews by people who know the topic. 
cousin_it1310

I think I can destroy this philosophy in two kicks.

Kick 1: pleasure is not one-dimensional. There are different parts of your brain that experience different pleasures, with no built-in way to compare between them.

When you retreat from kick 1 by saying "my decision-making provides a way to compare, the better pleasure is the one I'll choose when asked", here comes kick 2: your decision-making won't work for that. There are compulsive behaviors that people want to do but don't get much pleasure from. And in every decision there's a possible component o... (read more)

1Greenless Mirror
Ouch! I acknowledge the complexity of formalizing pleasure, as well as formalizing everything else related to consciousness. I think it's a technical problem that can be solved by just throwing more thinkoomph at it. Actions and feelings are often weakly connected — as I’ve said, a rational choice for most living beings could be suicide — but I think the development of rationality-as-the-art-of-winning naturally strengthens the correlation between them. At least on some level, compulsions are tied to pleasure and pain, with predictable distortions, like valuing short-term over long-term. And introspectively, I don’t see any barriers to comparing love with orgasm, with good food, with religious ecstasy, all within the same metric, even though I can’t give you numbers for it. If you believe that consciousness has a physical nature, or at least interacts with the physical world, we’ll derive those numbers. It seems to me that the multidimensionality of pleasure doesn’t explain anything because you’ll still need to stuff these parameters into a single utility function to be a coherent agent. If the most efficient way to convert negentropy into pleasure ends up being not “100% orgasm” but “37.2% love, 20.5% sexual arousal, 19.8% mono no aware, 16% humor, and 6.5% glory of fnuplpflupflonium”, then so be it, but I don't really expect it to be true. I can't imagine what alternative you're proposing other than reducing everything to a single metric, or what elements other than qualia you might include in that metric.

Good point. But I think the real game changer will be self-modification tech, not longevity tech. In that case we won't have a "slow adaptation" problem, but we'll have a "fast adaptation in weird directions" problem which is probably worse.

cousin_it*50

In Copenhagen every street has wide sidewalks and bike lanes in both directions, and there's lots of public transport too. It's good.

2lsusr
That sounds like it would be interesting to visit.

I don't understand Eliezer's explanation. Imagine Alice is hard-working and Bob is lazy. Then Alice can make goods and sell them to Bob. Half the money she'll spend on having fun, the other half she'll save. In this situation she's rich and has a trade surplus, but the other parts of the explanation - different productivity between different parts of Alice (?) and inability to judge her own work fairly (?) - don't seem to be present.
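To make the arithmetic explicit (my own toy figures, just restating the scenario above): Alice's trade surplus is simply her savings, and none of the other ingredients from the explanation are needed to produce it.

```python
# Toy restatement of the Alice/Bob scenario above (hypothetical numbers).
alice_sales_to_bob = 100   # value of goods Alice makes and sells to Bob
alice_purchases = 50       # half the proceeds spent on having fun (buying from Bob)
alice_savings = 50         # the other half saved

alice_trade_surplus = alice_sales_to_bob - alice_purchases
assert alice_trade_surplus == alice_savings   # the surplus is just her savings
print(alice_trade_surplus)                    # 50
```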

2lsusr
It doesn't work at that small of a scale. More generally, this principle doesn't work on any scale too small to support an international industrial economy. It wouldn't even work for trade between different tribes of farmers. This is a phenomenon that you only see at very large scales of human behavior. You need massive coordination failures colliding with each other for these ideas to kick in.
cousin_it0-4

No. Committing a crime inflicts damage. But interacting with a person who committed a crime in the past doesn't inflict any damage on you.

6Said Achmiz
It predictably inflicts damage statistically, however—and (and this is the key part!) it prevents you from affecting that statistical distribution according to your own judgment. It would be as if, for example, you weren’t allowed to drive carefully (or to not drive). Driving is dangerous, right? It’s not guaranteed to harm you, but there’s a certain chance that it will. But we accept this—why? Because you have the option of driving carefully, obeying the rules of the road, not driving when you’re tired or inebriated or when it’s snowing, etc.; indeed, you have the option of not driving at all. But if you were forced to drive, no matter the circumstances, this would indeed constitute, in a quite relevant sense, “inflicting damage”.

Because the smaller measure should (on my hypothesis) be enough to prevent crime, and inflicting more damage than necessary for that is evil.

3Alexander Turok
IMO forcing law abiding citizens to associate with criminals is inflicting damage on them without a necessary justification.

Because otherwise everyone will gleefully discriminate against them in every way they possibly can.

3Alexander Turok
But why's that a bad thing?
cousin_it*110

I think the US has too much punishment as it is, with very high incarceration rate and prison conditions sometimes approaching torture (prison rape, supermax isolation).

I'd rather give serial criminals some kind of surveillance collars that would detect reoffending and notify the police. I think a lot of such people can be "cured" by high certainty of being caught, not by severity of punishment. There'd need to be laws to prevent discrimination against people with collars, though.

1frontier64
This stems from a misunderstanding of how the career-criminal mind works. They don't really care about being caught. They remember how out of the last 40 or so times they walked into Walmart and left with ~$100 in unpaid merchandise they only got caught half the time and the other half of the time they got let off with time served of 10-20 days. Either they get away with it or they gotta wait a couple weeks before they get to try again. Not a big deal either way. So much of the crime plaguing modern America is open and obvious and even caught on camera. It's just that the criminal justice system refuses to punish repeat petty offenders. What punishment do you think someone who has been convicted of stealing 15 times before should get on his 16th conviction?
3Alexander Turok
"There'd need to be laws to prevent discrimination against people with collars, though." Why?

Yeah, I stumbled on this idea a long time ago as well. I never drink sugary drinks, my laptop is permanently in grayscale mode and so on. And it doesn't feel like missing out on fun; on the contrary, it allows me to not miss out. When I "mute" some big, addictive, one-dimensional thing, I start noticing all the smaller things that were being drowned out by it. Like, as you say, noticing the deliciousness of baked potatoes when you're not eating sugar every day, or noticing all the colors in my home and neighborhood when my screen is on grayscale.

cousin_it121

I suppose the superassistants could form coalitions and end up as a kind of "society" without too much aggression. But this all seems moot, because superassistants will anyway get outcompeted by AIs that focus on growth. That's the real danger.

I don't quite understand the plan. What if I get access to cheap friendly AI, but there's also another much more powerful AI that wants my resources and doesn't care much about me? What would stop the much more powerful AI from outplaying me for these resources, maybe by entirely legal means? Or is the idea that somehow the AIs in public access are always the strongest possible? That isn't true even now.

5Julian Bradshaw
The only sane version of this I can imagine is where there's either one aligned ASI, or a coalition of aligned ASIs, and everyone has equal access. Because the AI(s) are aligned they won't design bioweapons for misanthropes and such, and hopefully they also won't make all human effort meaningless by just doing everything for us and seizing the lightcone etc etc.
ozziegooen1311

This might be obvious, but I don't think we have evidence to support the idea that there really is anything like a concrete plan. All of the statements I've seen from Sam on this issue so far are incredibly basic and hand-wavy. 

I suspect that any concrete plan would be fairly controversial, so it's easiest to speak in generalities. And I doubt there's anything like an internal team with some great secret macrostrategy - instead I assume that they haven't felt pressured to think through it much. 

cousin_it*142

I also agree with all of this.

For what an okayish possible future could look like, I have two stories in mind:

  1. Humans end up as housecats. Living among much more powerful creatures doing incomprehensible things, but still mostly cared for.

  2. Some humans get uplifted to various levels, others stay baseline. The higher you go, the more aligned you must be to those below. So still a hierarchy, with super-smart creatures at the top and housecats at the bottom, but with more levels in between.

A post-AI world where baseline humans are anything more than hou... (read more)

2Kabir Kumar
Make the (!aligned!) AGI solve a list of problems, then end all other AIs, convince (!harmlessly!) all humans to never make another AI, in a way that they will pass down to future humans, then end itself. 
cousin_it*40

Thanks for writing this, it's a great explanation-by-example of the entire housing crisis.

2jefftk
I think you might find the pushback in the FB comments even more illustrative. Including one where a commenter doesn't want the new construction because it could lure NIMBYs to move in.

Well, Christianity sometimes spread by conquest, but other times it spread peacefully just as effectively. Same for democracy. So I don't think the spread of moral values requires conquest.

Wait, but we know that people sometimes have happy moments. Is the idea that such moments are always outweighed by suffering elsewhere? It seems more likely that increasing the proportion of happy moments is doable, an engineering problem. So basically I'd be very happy to see a world such as in the first half of your story, and don't think it would lead to the second half.

5JBlack
It seems that their conclusion was that no amount of happy moments for people could possibly outweigh the unimaginably large quantity of suffering in the universe required to sustain those tiny flickers of merely human happiness amid the combined agony of a googolplex or more fundamental energy transitions within a universal wavefunction. There is probably some irreducible level of energy transitions required to support anything like a subjective human experience, and (in the context of the story at least) the total cost in suffering for that would be unforgivably higher. I don't think the first half would definitely lead to the second half, but I can certainly see how it could.
2AnthonyC
I don't think the idea is that happy moments are necessarily outweighed by suffering. It reads to me like it's the idea that suffering is inherent in existence, not just for humans but for all life, combined with a kind of negative utilitarianism.  I think I would be very happy to see that first-half world, too. And depending on how we got it, yeah, it probably wouldn't go wrong in the way this story portrays. But, the principles that generate that world might actually be underspecified in something like the ways described, meaning that they allow for multiple very different ethical frameworks and we couldn't easily know in advance where such a world would evolve next. After all, Buddhism exists: Within human mindspace there is an attractor state for morality that aims at self-denial and cessation of consciousness as a terminal value. In some cases this includes venerating beings who vow to eternally intervene/remain in the world until everyone achieves such cessation; in others it includes honoring or venerating those who self-mummify through poisoning, dehydrating, and/or starving themselves.  Humans are very bad at this kind of self-denial in practice, except for a very small minority. AIs need not have that problem. Imagine if, additionally, they did not inherit the pacifism generally associated with Buddhist thought but instead believed, like medieval Catholics, in crusades, inquisitions, and forced conversion. If you train an AI on human ethical systems, I don't know what combination of common-among-humans-and-good-in-context ideas it might end up generalizing or universalizing.
5Davidmanheim
The sequence description is: "Short stories about (implausible) AI dooms. Any resemblance to actual AI takeover plans is purely coincidental."
cousin_it*40

Your theory would predict that we'd be much better at modeling tigers (which hunted us) than at modeling antelopes (which we hunted), but in reality we're about equally bad at modeling either, and much better at modeling other humans.

cousin_it*269

I don't think this post addresses the main problem. Consider the exchange ratio between labor and land. You need land to live, and your food needs land to be grown. Will you be able to afford more land use for the same work hours, or less? (As a programmer, manager, CEO, or whatever super-high-productivity job.) Well, if the same land can be used to run AIs that can do your job N times over, then from your labor you won't be able to afford it, and that closes the case.

So basically, the only way the masses can survive long term is by some kind of handouts. It won't just happen by itself due to tech progress and economic laws.
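A toy sketch of that exchange-ratio point, with numbers and framing of my own rather than the author's: if a plot of land can host compute doing your job N times over, the rent an AI operator can bid for it scales with N, while your own bid is capped by your wage.

```python
# Toy sketch of the labor-vs-land exchange ratio (hypothetical numbers).
# A plot of land can either support one human worker or host compute that
# does the same job N times over. Rent goes to the highest bidder.

human_output_per_year = 1.0          # normalize: your job produces 1 unit of value per year

for n in (0.5, 1, 10, 1000):         # copies of your job the plot's compute can run
    ai_max_rent_bid = n * human_output_per_year    # an AI operator can bid up to this
    your_max_rent_bid = human_output_per_year      # you can bid at most your whole wage
    outbid = ai_max_rent_bid > your_max_rent_bid
    print(f"N={n}: human outbid for the land: {outbid}")

# Once N > 1, labor income alone can no longer win the bid for the land it needs,
# which is the comment's point about transfers ("handouts") being required.
```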

2ProgramCrafter
Actually, AIs can use other kinds of land (to suggest from the top of the head, sky islands over oceans, or hot air balloons for a more compact option) to be run, which are not usable by humans. There have to be a whole lot of datacenters to make people short on land - unless there are new large factories built.
cousin_it114

I don't buy it. Lots of species have predators and have had them for a long time, but very few species have intelligence. It seems more likely that most of our intelligence is due to sexual selection, a Fisherian runaway that accidentally focused on intelligence instead of brightly colored tails or something.

2Benquo
The post describes how predation creates a specific gradient favoring better modeling of predator behavior. While the fact that most predated species don't develop high intelligence is Bayesian evidence against this explanation, it's very weak counterevidence because general self-aware intelligence is a very narrow target. More importantly, why would sexual selection specifically target intelligence rather than any other trait? Looking at peacocks, we can see what appears to be an initial predation-driven selection for looking like they had big intimidating eyes on their backs (similar to butterflies), followed by sexual selection amplifying along roughly that same gradient direction.
2Raemon
This seems mostly right, but it seems plausible to me that the predator/prey cycle was a necessary prerequisite to get us into a basin where intelligence-sexual-selection was a plausible outcome.

An ASI project would be highly distinguishable from civilian AI applications and not integrated with a state’s economy

Why? I think there's a smooth ramp from economically useful AI to superintelligence: AIs gradually become better at many tasks, and these tasks help more and more with improving AI in turn.

2Mateusz Bagiński
Pages 22-23: It's not obvious to me either. At least in the current paradigm, it seems plausible that a state project of (or, deliberately aimed at) developing ASI would yield a lot of intermediate non-ASI products that would then be dispersed into the economy or military. That's what we've been seeing until now. Are there reasons to expect this not to continue? One reason might be that an "ASI Manhattan Project" would want to keep their development secrets so as to minimize information leakage. But would they keep literally all useful intermediate products to themselves? Even if they reveal some X, and civilians play with this X and conclude that X is useless for the purpose of developing ASI, this might still be a valuable negative result that closes off some until-then-plausible ASI development paths. This is one reason I think the Manhattan Project is a poor model for a state ASI project. Intermediate results of the original Manhattan Project didn't trickle down into the economy while the project was still ongoing. I'm not claiming that people are unaware of those disanalogies, but I expect thinking in terms of an "ASI Manhattan Project" encourages overanchoring on it.
cousin_it*40

For cognitive enhancement, maybe we could have a system like "the smarter you are, the more aligned you must be to those less smart than you"? So enhancement would be available, but would make you less free in some ways.

I think the problem with WBE is that anyone who owns a computer and can decently hide it (or fly off in a spaceship with it) becomes able to own slaves, torture them and whatnot. So after that technology appears, we need some very strong oversight - it becomes almost mandatory to have a friendly AI watching over everything.

1RussellThor
I'm considering a world transitioning to being run by WBE rather than AI, so I would prefer not to give everyone "slap drones" https://theculture.fandom.com/wiki/Slap-drone To start with, the compute will mean few WBE, far fewer than humans, and they will police each other. Later on, I am too much of a moral realist to imagine that there would be mass senseless torturing. For a start, if you well protect other em's so you can only simulate yourself, you wouldn't do it. I expect any boring job can be made non-conscious so there just isn't the incentive to do that. At the late-stage singularity, if you will let humanity go their own way, there is fundamentally a tradeoff between letting "people" (WBE etc) make their own decisions and allowing the possibility of them doing bad things. You also have to be strongly suffering-averse vs util - there would surely be >>> more "heavens" vs "hells" if you just let advanced beings do their own thing.

What about biological augmentation of intelligence? I think if other avenues are closed, this one can still go pretty far and make things just as weird and risky. You can imagine biological self-improving intelligences too.

So if you're serious about closing all avenues, it amounts to creating a god that will forever watch over everything and prevent things from becoming too smart. It doesn't seem like such a good idea anymore.

2Aram Panasenco
It appears that by default, unless some perfect 100% bulletproof plan of aligning it is found, calling superintelligence a galaxy-destroying nuke is an understatement. So if there was some chance of a god forever watching over everything and preventing things from becoming too smart, I'd take it in a heartbeat. Realistically, "watch over everything and prevent things from becoming too smart" is probably too difficult a goal to align, but perhaps a goal like "watch over everything and prevent programs with transformer-based architectures from running on silicon-based chips while keeping all other interference to a minimum" would actually be possible to define without everyone getting atomized. Such a goal would buy humanity some time and also make it obvious to everyone just how close to the edge we are, and how big the stakes are.
cousin_it140

Sure. But in an economy with AIs, humans won't be like Bob. They'll be more like Carl the bottom-percentile employee who struggles to get any job at all. Even in today's economy lots of such people exist, so any theoretical argument saying it can't happen has got to be wrong.

And if the argument is quantitative - say, that the unemployment rate won't get too high - then imagine an economy with 100x more AIs than people, where unemployment is only 1% but all people are unemployed. There's no economic principle saying that can't happen.
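A quick arithmetic check of that scenario (my own toy numbers): with 100x more AI workers than humans, every human can be unemployed while the headline unemployment rate stays around 1%.

```python
# Sanity-check the "1% unemployment, all humans unemployed" scenario (toy numbers).
people = 1_000_000
ais = 100 * people                  # 100x more AI workers than people
workforce = people + ais            # 101,000,000 "workers" in total
unemployed = people                 # suppose literally every human is out of work
unemployment_rate = unemployed / workforce
print(f"{unemployment_rate:.2%}")   # ~0.99%, i.e. about 1%
```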

4Steven Byrnes
The context was: Principle (A) makes a prediction (“…human labor will retain a well-paying niche…”), and Principle (B) makes a contradictory prediction (“…human labor…will become so devalued that we won’t be able to earn enough money to afford to eat…”). Obviously, at least one of those predictions is wrong. That’s what I said in the post. So, which one is wrong? I wrote: “I have opinions, but that’s out-of-scope for this little post.” But since you’re asking, I actually agree with you!! E.g. footnote here:

That, incidentally, implies that human labor will retain a well-paying niche—just as less-skilled labor today can still get jobs despite more-skilled labor also existing.

Less skilled labor has a well-paying niche today?

2wachichornia
I think he's talking about cost disease?  https://en.m.wikipedia.org/wiki/Baumol_effect
3Steven Byrnes
The point I’m trying to make here is a really obvious one. Like, suppose that Bob is a really great, top-percentile employee. But suppose that Bob’s roommate Alice is an obviously better employee than Bob along every possible axis. Clearly, Bob will still be able to get a well-paying job—the existence of Alice doesn’t prevent that, because the local economy can use more than one employee.

Yeah, on further thought I think you're right. This is pretty pessimistic then: AI companies will find it easy to align AIs to money interests, and the rest of us will be in a "natives vs the East India Company" situation. More time to spend on alignment then matters only if some companies actually try to align AIs to something good instead, and I'm not sure any companies will do that.

2Nathan Helm-Burger
Yeah, any small group of humans seizing unprecedented control over the entire world seems like a bad gamble to take, even if they start off seeming like decent people. I'm currently hoping we can figure some kind of new governance solution for managing decentralized power while achieving adequate safety inspections. https://www.lesswrong.com/posts/FEcw6JQ8surwxvRfr/human-takeover-might-be-worse-than-ai-takeover?commentId=uSPR9svtuBaSCoJ5P
4Noosphere89
This is my view of the situation as well, and is a big portion of the reason why solving AI alignment, which reduces existential risk a lot, is non-trivially likely without further political reforms I don't expect to lead to dystopian worlds (from my values).

I wonder how hard it would be to make the Sun stop shining? Maybe the fusion reaction could be made subcritical by adding some "control rod" type stuff.

Edit: I see other commenters also mentioned spinning up the Sun, which would lower the density and stop the fusion. Not sure which approach is easier.

I guess the opposite point of view is that aligning AIs to AI companies' money interests is harmful to the rest of us, so it might actually be better if AI companies didn't have much time to do it, and the AIs got to keep some leftover morality from human texts. And WBE would enable the powerful to do some pretty horrible things to the powerless, so without some kind of benevolent oversight a world with WBE might be scary. But I'm not sure about any of this, maybe your points are right and mine are wrong.

3RussellThor
Perhaps, depends how it is. I think we could do worse than just have Anthropic have a 2 year lead etc. I don't think they would need to prioritize profit as they would be so powerful anyway - the staff would be more interested in getting it right and wouldn't have financial pressure. WBE is a bit difficult, there needs to be clear expectations, i.e. leave weaker people alone and make your own world https://www.lesswrong.com/posts/o8QDYuNNGwmg29h2e/vision-of-a-positive-singularity There is no reason why super AI would need to exploit normies. Whatever we decide, we need some kind of clear expectations and values regarding what WBE are before they become common. Are they benevolent super-elders, AI gods banished to "just" the rest of the galaxy, the natural life progression of first world humans now?
4Nathan Helm-Burger
In one specific respect I'd like to challenge your point. I think fine-tuning models currently aligns them 'well-enough' to any target point of view. I think that the ethics shown by current LLMs are due to researchers actively putting them there. I've been doing red teaming exercises on LLMs for over a year now, and I find it quite easy to fine-tune them to be evil and murderous. Human texts help them understand morality, but don't make them care enough about it for it to be sticky in the face of fine-tuning.
cousin_it1412

Huh? Environmentalism means letting things work as they naturally worked, not changing them to be "reversible" or something else.

2momom2
From the disagreement between the two of you, I infer there is yet debate as to what environmentalism means. The only way to be a true environmentalist then is to make things as reversible as possible until such time as an ASI can explain what the environmentalist course of action regarding the Sun should be.
cousin_it117

There have been many controversies about the World Bank. A good starting point is this paragraph from Naomi Klein's article:

The truth is that the bank's credibility was fatally compromised when it forced school fees on students in Ghana in exchange for a loan; when it demanded that Tanzania privatise its water system; when it made telecom privatisation a condition of aid for Hurricane Mitch; when it demanded labour "flexibility" in Sri Lanka in the aftermath of the Asian tsunami; when it pushed for eliminating food subsidies in post-invasion Iraq. Ecuado

... (read more)

Fair enough. And it does seem to me like the action will be new laws, though you're right it's hard to predict.

3Fred Heiding
Great discussion. I’d add that it’s context-dependent and somewhat ambiguous. It’s noteworthy that our work shows that all tested AI models conflict with at least three of the eight prohibited AI practices outlined in the EU’s AI Act. It’s also worth noting that the only real difference between sophisticated phishing and marketing can be the intention, making mitigation difficult. Actions from AI companies to prevent phishing might restrict legitimate use cases too much to be interesting.

This one isn't quite a product though, it's a service. The company receives a request from a criminal: "gather information about such-and-such person and write a personalized phishing email that would work on them". And the company goes ahead and does it. It seems very fishy. The fact that the company fulfilled the request using AI doesn't even seem very relevant, imagine if the company had a staff of secretaries instead, and these secretaries were willing to make personalized phishing emails for clients. Does that seem like something that should be legal?... (read more)

3Dagon
"seem like something that should be legal" is not the standard in any jurisdiction I know.  The distinctions between individual service-for-hire and software-as-a-service are pretty big, legally, and make the analogy not very predictive. I'll take the other side of any medium-term bet about "action will be taken in a hurry" if that action is lawsuit under current laws.  Action being new laws could happen, but I can't guess well enough to have any clue how or when it'd be.

Yeah, this is really dumb. I wonder if it would've gone better if the AI profiles had been more honest to begin with, using actual datacenter photos as their profile pics and so on.

Are AI companies legally liable for enabling such misuse? Do they take the obvious steps to prevent it, e.g. by having another AI scan all chat logs and flag suspicious ones?

3Dagon
No, they're not.  I know of no case where a general-purpose toolmaker is responsible for misuse of its products. This is even less likely for software, where it's clear that the criminals are violating their contract and using it without permission. None of them, as far as I know, publish specifically what they're doing.  Which is probably wise - in adversarial situations, telling the opponents exactly what they're facing is a bad idea.  They're easy and cheap enough that "flag suspicious uses" doesn't do much - it's too late by the time the flags add up to any action. This is going to get painful - these things have always been possible, but have been expensive and hard to scale.  As it becomes truly ubiquitous, there will be no trustworthy communication channels.

For every person saying "religion gave me a hangup about sex" there will be another who says "religion led to me marrying younger" or "religion led me to have more kids in marriage". The right question is whether religion leads to more anti-reproduction attitude on average, but I can't see how that can be true when religious people have higher fertility.

3Benquo
This doesn’t seem to engage with the content of the post at all, or with my multiple corrections to your implausible misunderstandings, so I think this is a motivated pattern of misunderstanding and I’m done with your comments on this post.
cousin_it237

I've held this view for years and am even more pessimistic than you :-/

In healthy democracies, the ballot box could beat the intelligence curse. People could vote their way out.

Unfortunately, democracy itself depends on the economic and military relevance of masses of people. If that goes away, the iceberg will flip and the equilibrium system of government won't be democracy.

Tech that increases human agency, fosters human ownership of AI systems or clusters of agents, or otherwise allows humans to remain economically relevant

It seems really hard to think of any examples of such tech.

5tangerine
Agreed. The rich and powerful could pick off more and more economically irrelevant classes while promising the remaining ones the same won't happen to them, until eventually they can get everything they need from AI and live in enclaves protected by vast drone armies. Pretty bleak, but seems like the default scenario given the current incentives. I think you would effectively have to build extensions to people's neocortexes in such a way that those extensions cannot ever function on their own. Building AI agents is clearly not that.
cousin_it5-2

But many do maintain an explicit approval hierarchy that ranks celibacy and sexual restraint above typical sexual behavior

I think we just disagree here. The Bible doesn't say married people shouldn't have sex, and no prominent Christians say that either. There are norms against nonmarital sex, and there are norms against priests having sex, but you draw a connection between these things, and a generalization to all people, which doesn't sound right to me.

7Benquo
OK, so we've got something like a factual disagreement. Here are some observations that would change my mind substantially:

* Credible testimony from someone who'd previously been documented claiming that their variant of Christianity had inculcated in them an anti-sex attitude, that they'd been lying to normalize their non-culturally-conditioned aversion to sex.
* An exposé demonstrating that many such prominently documented testimonies were fake and did not correspond to actual people making those claims. Examples of the sort of thing I mean:
  * I Took a Christian Virginity Pledge As a Child And It Nearly Destroyed My Life
  * Growing Up Evangelical Ruined Sex for Me
  * Overcoming Religious Sexual Shame
* I try to find the Christian bible passages saying it's better never to marry or have sex (e.g. Matthew 19:9-12, 1 Corinthians 7), and persistently fail to find them. Or someone persuasively explains that I'm idiosyncratically misinterpreting them, and I can't find evidence of many people agreeing with me (e.g. those verses showing up when I do a Google search for "bible passages saying it's better never to marry or have sex").
* A methodologically careful cross-cultural survey demonstrates that this sort of well-attested sex-aversion isn't more common in people raised in high-commitment Christian communities, than in people in other cultures with no such messages.

What would change your mind?

There is text in the bible that strongly suggests the new testament set up celibacy as morally superior to sex within marriage. In practice, this mostly only one-shotted autists who got "yay bible" from their social group, and read the bible literally, but didn't read enough of the bible to realize that it is a self-contradicting mess.

You can "un self contradict" the bible, maybe, with enough scholarship such that people who learn the right interpretative schemes can learn about how maybe Paul's stuff shouldn't be taken as seriously as the red text, and ha... (read more)

Yeah, I missed a big part of your point on that. But another part maybe I didn’t? Your post started out talking about norms against nonmarital sex. Then you jump from that to saying they’re norms against reproduction - which doesn't sound right, religious people reproduce fine. And then you say (unless I'm missing something) that they're based on hypocrisy, enabling other people to not follow these norms, which also doesn't sound right.

6Benquo
Successful religions don't suppress reproduction in practice. But many do maintain an explicit approval hierarchy that ranks celibacy and sexual restraint above typical sexual behavior, sometimes expressing overt disgust with sexuality. This creates a gradient of social rewards that aids group cohesion, but requires most people to be "imperfect" by design. An important failure mode is that some conscientious people try to fully internalize the explicit values, ending up with clinical symptoms of sexual aversion that persist even when officially sanctioned (e.g. in marriage).
cousin_it2-3

I think this is wrong. First you say that celibacy would be pushed on lower status people like peasants, then you say it would be pushed on higher status people like warriors. But actually neither happens: it's not to the group's advantage (try to explain how making peasants or warriors celibate would advantage the group - you can't), and we don't find major religions doing it either, they are pro-fertility for almost all people. Celibacy of priests is an exception, but it's small and your explanations don't work for it either.

Benquo10-1

I don't think I made those claims. I did say that clerics are often supposed to be celibate, and warriors are generally supposed to move towards danger, in a single sentence, so I see how those claims might have been confused.

The general pattern I'm pointing out is that some scarce resources, or the approval which is a social proxy for such resources, are allocated preferentially to people who adopt an otherwise perverse preference. These systems are only sustainable with large amounts of hypocrisy, where people are on the whole "bad" rather than "good" ac... (read more)

1[comment deleted]

I think they meant that when people are afraid to lose their jobs, they spend less, leading to less demand for other people's work.
