Lukas_Gloor

When the issue is climate change, a prevalent rationalist take goes something like this:

"Climate change would be a top priority if it weren't for technological progress. However, because technological advances will likely help us to either mitigate the harms from climate change or will create much bigger problems on their own, we probably shouldn't prioritize climate change too much." 

We could say the same thing about these trends of demographic aging that you highlight. So, I'm curious why you're drawn to this topic and where the normative motivation in your writing is coming from.

In the post, you use normative language like, "This suggests that we need to lower costs along many fronts of both money and time, and also we need to stop telling people to wait until they meet very high bars." (In the context of addressing people's cited reasons for why they haven't had kids – money, insecurity about money, not being able to afford kids or the house to raise them in, and mental health.)

The way I conceptualize it, one can zoom in on different, plausibly-normatively-central elements of the situation:

(1) The perspective of existing people.

1a Nation-scale economic issues from an aging demographic, such as collapse of pension schemes, economic stagnation from the aging workforce, etc. 

1b Individual happiness and life satisfaction (e.g., a claim that having children tends to make people happier, also applying to parents 'on the margin' – people who, if we hadn't encouraged them, would have decided against children).

(2) Some axiological perspective that considers the interests of both existing and newly created people/beings.

It seems uncontroversial that both 1a and 1b are important perspectives, but it's not obvious to me whether 1a is a practical priority for us in light of technological progress (cf. the parallel to climate change) or how the empirics of 1b shake out (whether parents 'on the margin' are indeed happier). (I'm not saying 1b is necessarily controversial – for all I know, maybe the science already exists and is pretty clear. I'm just saying: I'm not personally informed on the topic, even though I have read your series of posts on fertility.)

And then, (2) seems altogether subjective and controversial in the sense that smart people hold different views on whether it's all-things-considered good to encourage people to have lower standards for bringing new people into existence. Also, there are strong reasons (I've written up a thorough case for this here and here) why we shouldn't expect there to be an objective answer to "how to do axiology?"

This series would IMO benefit from a "Why I care about this" note, because without it, I get the feeling of "Zvi is criticizing things governments do/don't do in a way that might underhandedly bias readers into thinking that the implied normative views on population ethics are unquestionably correct." The way I see it, governments are probably indeed behaving irrationally here given that they're not bought into the prevalent rationalist worldview on imminent technological progress (and that's an okay thing to sneer at), but this doesn't mean that we have to go "boo!" to all things associated with not choosing children, and "yeah!" to all things associated with choosing them.

That said, I still found the specific information in these roundups interesting, since this is clearly a large societal trend and it's worth thinking through its causes, implications, etc.

The tabletop game sounds really cool!

Interesting takeaways.

The first was exactly the above point, and that at some point, ‘I or we decide to trust the AIs and accept that if they are misaligned everyone is utterly f***ed’ is an even stronger attractor than I realized.

Yeah, when you say it like that... I feel like this is gonna be super hard to avoid!

The second was that depending on what assumptions you make about how many worlds are wins if you don’t actively lose, ‘avoid turning wins into losses’ has to be a priority alongside ‘turn your losses into not losses, either by turning them around and winning (ideal!) or realizing you can’t win and halting the game.’

There's also the option of, once you realize that winning is no longer achievable, trying to lose less badly than you would have otherwise. For instance, if, out of all the trajectories where humans lose, you can guess that some seem more likely to bring about an especially bad dystopian scenario, you can try to prevent at least those. Some examples I'm thinking of are AIs being spiteful or otherwise anti-social (on top of not caring about humans), or AIs being conflict-prone in AI-vs-AI interactions (including, perhaps, with AIs aligned to alien civilizations). Of course, it may not be possible to form strong opinions about what makes for a better or worse "losing" scenario – if you remain very uncertain, all losing scenarios will seem roughly equally lacking in value.

The third is that certain assumptions about how the technology progresses had a big impact on how things play out, especially the point at which some abilities (such as superhuman persuasiveness) emerge.

Yeah, but I like the idea of rolling dice for various options that we deem plausible (and having this built into the game). 

I'm curious to read takeaways from more groups if people continue to try this. I'm also curious about players' thoughts on good group sizes (how many people played at once, and whether you would have preferred more or fewer players).

I agree that it sounds somewhat premature to write off Larry Page based on attitudes he had a long time ago, when AGI seemed more abstract and far away, and to then never seek out communication with him again later on. If that were Musk's true and only reason for founding OpenAI, then I agree that this was a communication fuckup.

However, my best guess is that this story about Page was interchangeable with a number of alternative plausible criticisms of his competitors in building AGI that Musk would likely have come up with in nearby worlds. People like Musk (and Altman too) tend to have a desire to do the most important thing and the belief that they can do this thing a lot better than anyone else. On that assumption, it's not too surprising that Musk found a reason for having to step in and build AGI himself. In fact, on this view, we should expect to see surprisingly little sincere exploration of "joining someone else's project to improve it" solutions.

I don't think this is necessarily a bad attitude. Sometimes people who think this way are right in the specific situation. It just means that we see the following patterns a lot:

  • Ambitious people start their own thing rather than join some existing thing.
  • Ambitious people have falling-outs with each other after starting a project together where the question of "who eventually gets de facto ultimate control" wasn't totally specified from the start.

(Edited away a last paragraph that used to be here 50mins after posting. Wanted to express something like "Sometimes communication only prolongs the inevitable," but that sounds maybe a bit too negative because even if you're going to fall out eventually, probably good communication can help make it less bad.)

I thought the part you quoted was quite concerning, also in the context of what comes afterwards: 

Hiatus: Sam told Greg and Ilya he needs to step away for 10 days to think. Needs to figure out how much he can trust them and how much he wants to work with them. Said he will come back after that and figure out how much time he wants to spend.

Sure, the email by Sutskever and Brockman gave off some nonviolent-communication vibes, and maybe it isn't "the professional thing" to air one's feelings and perceived mistakes like that, but they seemed genuine in what they wrote, and they raised incredibly important concerns that are inherently difficult to bring up. Also, especially with hindsight, it seems like they had valid reasons to be concerned about Altman's power-seeking tendencies!

When someone expresses legitimate-given-the-situation concerns about your alignment, and your reaction is to basically gaslight them into thinking they did something wrong for finding it hard to trust you, and then you make it seem like you are the poor victim who needs 10 days off work to figure out whether you can still trust them, that feels messed up! (It's also a bit hypocritical, because the whole "I need 10 days to figure out if I can still trust you for thinking I like being CEO a bit too much" move seems childish too.)

(Of course, these emails are just snapshots and we might be missing things that happened in between via other channels of communication, including in-person talks.)

Also, I find it interesting that they (Sutskever and Brockman) criticized Musk just as much as Altman (if I understood their email correctly), which should have made it easier for Altman to react with grace. I guess, given Musk's own annoyed reaction, maybe Altman called the email childish in order to side with Musk's dismissive response to it.

Lastly, this email thread made me wonder what has happened between Brockman and Sutskever since then, because it now seems like Brockman no longer holds the same concerns about Altman, even though recent events seem to have added a lot of new fuel to them.

Some of the points you make don't apply to online poker. But I imagine that the most interesting rationality lessons from poker come from studying other players and exploiting them, rather than from memorizing and developing an intuition for the pure game theory of the game.

  • If you did want to focus on the latter goal, you can play online poker (many players play more than 12 tables at once) and, after every session, run your hand histories through a program (e.g., "GTO Wizard") that will tell you where you made mistakes compared to optimal strategy, and how much they would cost you against an optimally playing opponent. Then, for any mistake, you can even input the specific spot into the trainer program and practice it with similar hands, 4-tabling against the computer, with immediate feedback every time on how you played the spot.

It seems important to establish whether we are in fact going to be in a race and whether one side isn't already far ahead.

With racing, there's a difference between optimizing the chance of winning vs optimizing the extent to which you beat the other party when you do win. If it's true that China is currently pretty far behind, and if TAI timelines are fairly short so that a lead now is pretty significant, then the best version of "racing" shouldn't be "get to the finish line as fast as possible." Instead, it should be "use your lead to your advantage." So, the lead time should be used to reduce risks.

Not sure this is relevant to your post in particular; I could've made this point also in other discussions about racing. Of course, if a lead is small or non-existent, the considerations will be different.

I wrote a long post last year saying basically that.

Even if attaining a total and forevermore cessation of suffering is substantially more difficult/attainable by substantially fewer people in one lifetime, I don't think it's unreasonable to think that most people could suffer at least 50 percent less with dedicated mindfulness practice. I'm curious as to what might feed an opposing intuition for you! I'd be quite excited about empirical research that investigates the tractability and scalability of meditation for reducing suffering, in either case.

My sense is that existing mindfulness studies don't show the sort of impressive results that we'd expect if this were a great solution.

Also, I think people who would benefit most from having less day-to-day suffering often struggle with having no "free room" available for meditation practice, and that seems like an issue that's hard to overcome even if meditation practice would indeed help them a lot.

It's already a sign of having a decently good life when you're able to start dedicating time to something like meditation, which I think requires a bit more mental energy than just watching a series or scrolling through the internet. A lot of people have leisure time, but it's a privilege to be mentally well off enough to do purposeful activities during your leisure time. The people who have a lot of this purposeful time probably (usually) aren't among the ones who suffer most (whereas the people who don't have it will struggle to stick to a regular meditation practice, for good reasons).

For instance, if someone has a chronic illness with frequent pain and nearly constant fatigue, I can see how it might be good for them to practice meditation for pain management, but higher up on their priority list are probably things like "how do I manage to do daily chores despite low energy levels?" or "how do I avoid getting let go at work?"

Similarly, for other things people may struggle with (addictions, financial worries, anxieties of various sorts, other mental health issues), meditation is often something that would probably help, but it doesn't feel like priority number one for people with problem-ridden, difficult lives. It's pretty hard to keep up the motivation to train something you're not fully convinced is your top priority, especially if you're struggling with other things.

I see meditation as similar to things like "eat healthier, exercise more, go to sleep on time and don't consume distracting content or too much light in the late evenings, etc." And these things have great benefits, but they're also hard, so there's no low-hanging fruit here, and interventions in this space will have limited effectiveness (or at least limited cost-effectiveness; you could probably get quite far if you gave people their own private nutritionist and cook, fitness trainer and motivator, house cleaner and personal assistant, and meditation coach, gave them enough money for financial independence, etc.).

And then the people who would have enough "free room" to meditate may be well off enough not to feel like they need it? In some ways, the suffering of a person who is reasonably well off in life isn't that bad, and instead of devoting an hour per day to meditation practice to reduce the little suffering they have, maybe the well-off person would rather take Spanish lessons, or train for a marathon, etc.

(By the way, would it be alright if I ping you privately to set up a meeting? I've been a fan of your writing since becoming familiar with you during my time at CLR and would love a chance to pick your brain about SFE stuff and hear about what you've been up to lately!)

I'll send you a DM!

[...] I am certainly interested to know if anyone is aware of sources that make a careful distinction between suffering and pain in arguing that suffering and its reduction is what we (should) care about.

I did so in my article on Tranquilism, so I broadly share your perspective!

I wouldn't go as far as what you're saying in endnote 9, though. I mean, I see some chance that you're right in the impractical sense of, "If someone gave up literally all they cared about in order to pursue ideal meditation training under ideal circumstances (and during the training they don't run into any physical illness or other issues that prevent successful completion of the training), then they could learn to control their mental states and avoid nearly all future sources of suffering." But that's pretty impractical even if true!

It's interesting, though, what you say about CBT. I agree it makes sense to be accurate about these distinctions, and that it could affect specific interventions (though maybe not at the largest scale of prioritization, the way I see the landscape).

This would be a valid rebuttal if instruction-tuned LLMs were only pretending to be benevolent as part of a long-term strategy to eventually take over the world, and execute a treacherous turn. Do you think present-day LLMs are doing that? (I don't)

Or that they have a sycophancy drive. Or that, next to "wanting to be helpful," they also have a bunch of other drives that will likely win over the "wanting to be helpful" part once the system becomes better at long-term planning and orienting its shards towards consequentialist goals. 

On that latter model, "wanting to be helpful" is a mask that the system is trained to play better and better, but it isn't the only thing the system wants to do, and it might find, once it gets good at trying on various other masks to see how this improves its long-term planning, that it for some reason prefers a different "mask" to become its locked-in personality.
