Lukas_Gloor
Comments

sunwillrise's Shortform
Lukas_Gloor · 24d · 64

Oh, thanks! Yeah, I should reverse my vote, then. I got confused by the sentence structure (and by commenting before my morning coffee).

sunwillrise's Shortform
Lukas_Gloor · 24d* · 42

I disagree-voted this comment [edit: reversed now because I misread the comment I'm replying to] because the sort of pushback Said typically gives doesn't remind me of "the good old days" (I think that's a separate thing). But I want to flag that, as someone who's had negative reactions to Said's commenting style in the past, over the past two years or so I've noticed several instances where I thought he left valuable comments or criticism that felt on point, and far fewer (possibly zero) instances of "omg this feels uncharitable and nitpicky/deliberately playing dense." So, for my part at least, I no longer consider myself to have strong opinions on this topic.

(Note that I haven't read the recent threads with Gordon Seidoh Worley, so this shouldn't be interpreted as me taking a side on that.)

The Value Proposition of Romantic Relationships
Lukas_Gloor · 1mo · 70

Thanks, that's helpful context! Yeah, it's worth flagging that I have not read Duncan's post beyond the list.

Duncan's post suggests that different people in the same social context can view exercises from this list either as potentially humiliating comfort-zone-pushing challenges, or as a silly-playful-natural thing to do.

Seems like my reaction proved this part right, at least. I knew some people must find something about it fun, but my model was more like "Some people think comfort/trust zone expansion itself is fun" rather than "Some people with already-wide comfort/trust zones find it fun to do things that other people would only do under the banner of comfort/trust zone expansion." 

(Sometimes the truth can be somewhere in the middle, though? I would imagine that the people who would quite like to do most of the things in the list find it appealing that it's about stuff you "don't normally do," that it's "pushing the envelope" a little?)

That said, I don't feel understood by the (fear of) humiliation theme in your summary of Duncan's post. Sure, that's a thing and I have it as well, but the even bigger reason I wouldn't be comfortable going through a list of "actions to do in the context of a game that's supposed to be fun" is that the entire concept just doesn't do anything for me? It just seems pointless at best, plus there's discomfort from the artificiality of it?

As I also wrote in my reply to John:

It's hard to pinpoint why exactly I think many people are highly turned off by this stuff, but I'm pretty sure (based on introspection) that it's not just fear of humiliation or not trusting other people in the room. There's something off-putting to me about the performativeness of it. Something like "If the only reason I'm doing it is because I'm following instructions, not because at least one of us actually likes it and the other person happily consents to it, it feels really weird."

(This actually feels somewhat related to why I don't like small talk -- but that probably can't be the full explanation because my model of most rationalists is that they probably don't like small talk.)

The Value Proposition of Romantic Relationships
Lukas_Gloor · 1mo · 70

I was initially surprised that you think I was generalizing too far -- because that's what I criticized about your quoting of Duncan's list, and in my head I was just pointing to myself as an obviously valid counterexample (because I'm a person who exists, and fwiw many but not all of my friends are similar), not claiming that all other people would be similarly turned off.

But seeing Thane's reply, I think it's fair to say that I was generalizing too far by using the framing of "comfort zone expansion" for things that some people might legitimately find fun.

As I'm also going to write in my reply to Thane: I knew some people must find something about things like the ASMR example fun, but my model was more like "Some people think comfort/trust zone expansion itself is fun" rather than "Some people with already-wide comfort/trust zones find it fun to do things that other people would only do under the banner of comfort/trust zone expansion." Point taken!

Still, I feel like the list could be more representative of humanity in general if it didn't use so many examples that only appeal to people who like things like circling, awkward social games, etc.

It's hard to pinpoint why exactly I think many people are highly turned off by this stuff, but I'm pretty sure (based on introspection) that it's not just fear of humiliation or not trusting other people in the room. There's something off-putting to me about the performativeness of it. Something like "If the only reason I'm doing it is because I'm following instructions, not because at least one of us actually likes it and the other person happily consents to it, it feels really weird." 

(This actually feels somewhat related to why I don't like small talk -- but that probably can't be the full explanation because my model of most rationalists is that they probably don't like small talk.) 

The Value Proposition of Romantic Relationships
Lukas_Gloor · 1mo · 50

As this post was coming together, Duncan fortuitously dropped a List of Truths and Dares which is pretty directly designed around willingness to be vulnerable, in exactly the sense we’re interested in here. Here is his list; consider it a definition-by-examples of willingness to be vulnerable: 


I'm pretty sure you're missing something (edit: or rather, you got the thing right but have added some other thing that doesn't belong), because the list in question is about more than just willingness to be vulnerable in the sense that gives value to relationships. (A few examples from the list are fine as definition-by-examples for that purpose, but more than 50% of the examples are about something entirely different.) Most of the examples in the list are about comfort zone expansion. Vulnerability in relationships is natural/authentic (which doesn't mean it has to feel easy), while comfort zone expansion exercises are artificial/stilted.

You might reply that the truth-and-dare context of the list means that obviously everything is going to seem a bit artificial, but the thing you were trying to point at is just "vulnerability is about being comfortable being weird with each other." But that defense fails because being comfortable is literally the opposite of pushing your comfort zone.

For illustration, if my wife and I put our faces together and we make silly affectionate noises because somehow we started doing this and we like it and it became a thing we do, that's us being comfortable and a natural expression of playfulness. By contrast, if I were to give people who don't normally feel like doing this the instruction to put their faces together and make silly affectionate noises, probably the last thing they will be is comfortable!

[Edited to add:] From the list, the best examples are the ones that get people to talk about topics they wouldn't normally talk about, because the goal is to say true things that are for some reason difficult to say, which is authentic. By contrast, instructing others to perform actions they wouldn't normally feel like performing (or wouldn't feel like performing in this artificial sort of setting) is not about authenticity.

I'm not saying there's no use to expanding one's comfort zone. Personally, I'd rather spend a day in solitary confinement than whisper in a friend's ear for a minute ASMR-style, but that doesn't mean that my way of being is normatively correct -- I know intellectually that the inner terror of social inhibitions or the intense disdain for performative/fake-feeling social stuff isn't to my advantage in every situation. Still, in the same way, those who've made it a big part of their identity to continuously expand their comfort zones (or who maybe see value in helping others come out of their shell) should also keep in mind that not everyone values that sort of thing or needs it in their lives.

MichaelDickens's Shortform
Lukas_Gloor · 3mo · 53

At a moderate P(doom), say under 25%, from a selfish perspective it makes sense to accelerate AI if it increases the chance that you get to live forever, even if it increases your risk of dying.

If you're not elderly or otherwise at risk of irreversible harms in the near future, then pausing for a decade (say) to reduce the chance of AI ruin by even just a few percentage points still seems good. So the crux is still "can we do better by pausing." (This assumes pauses on the order of 2-20 years; the argument changes for longer pauses.)

Maybe people think the background level of x-risk is higher than it used to be over the last decades because the world situation seems to be deteriorating. But IMO this also increases the selfishness aspect of pushing AI forward: if you're that desperate for a deus ex machina, surely you also have to think that there's a good chance things will get worse when you push technology forward.

(Lastly, I also want to note that for people who care less about living forever and care more about near-term achievable goals like "enjoy life with loved ones," the selfish thing would be to delay AI indefinitely, because rolling the dice for a longer future is then less obviously worth it.)

OpenAI lost $5 billion in 2024 (and its losses are increasing)
Lukas_Gloor · 3mo · 52

Well done finding the direct contradiction. (I also thought the claims seemed fishy but didn't think of checking whether model running costs are bigger than revenue from subscriptions.)

Two other themes in the article seem to be in a bit of tension to me:

  • Models have little potential/don't provide much value.
  • People use their subscriptions so much that the company loses money on its subscriptions.

It feels like if people max out use on their subscriptions, then the models are providing some kind of value (which makes it promising to keep working on them, even if just to make inference cheaper). By contrast, if people don't use them much, you should at least be able to make a profit on existing subscriptions (even if you might be worried about user retention and growth rates).

All of that said, I also get the impression "OpenAI is struggling." I just think it has more to do with their specific situation than with the industry as a whole (plus I'm not as confident in this take as the author seems to be).

AI #108: Straight Line on a Graph
Lukas_Gloor · 4mo · 40

Rob Bensinger: If you’re an AI developer who’s fine with AI wiping out humanity, the thing that should terrify you is AI wiping out AI.

The wrong starting seed for the future can permanently lock in AIs that fill the universe with non-sentient matter, pain, or stagnant repetition.

For those interested in this angle (how AI outcomes without humans could still go a number of ways, and what variables could make them go better/worse), I recently brainstormed here and here some things that might matter.

Going Nova
Lukas_Gloor · 4mo · 118

Parts of how that story was written trigger my sense of "this might have been embellished." (It reminds me of viral reddit stories.)

I'm curious if there are other accounts where a Nova persona got a user to contact a friend or family member with the intent of getting them to advocate for the AI persona in some way. 

AI #107: The Misplaced Hype Machine
Lukas_Gloor · 4mo · 20

"The best possible life" for me pretty much includes "everyone who I care about is totally happy"?

Okay, I can see it being meant that way. (Even though, if you take this logic further, you could, as an altruist, make it include everything going well for everyone everywhere.) Still, that's only 50% of the coinflip.

And parents certainly do dangerous risky things to provide better future for their children all the time.

Yeah, that's true. I could even imagine that parents are more likely to flip coins that say "you die for sure but your kids get a 50% chance of the perfect life." (Especially if the kids are at an age where they would be able to take care of themselves even under the bad outcome.) 

Posts

  • We might be missing some key feature of AI takeoff; it'll probably seem like "we could've seen this coming" (100 · 1y · 36)
  • AI alignment researchers may have a comparative advantage in reducing s-risks (49 · 2y · 1)
  • Moral Anti-Realism: Introduction & Summary (15 · 3y · 0)
  • Moral Anti-Epistemology (4 · 10y · 36)
  • Arguments Against Speciesism (30 · 12y · 476)