Davidmanheim
Comments (sorted by newest)

Which side of the AI safety community are you in?
Davidmanheim · 7h · 2 karma · 0 agreement

I'm pointing out that the third camp, which you deny really exists, does exist, and, as an aside, is materially different in important ways from the other two camps.

You say you don't think this matters for allocating funding, and you don't care about what others actually believe. I'm just not sure why either point is relevant here.

Which side of the AI safety community are you in?
Davidmanheim · 13h · 2 karma · 0 agreement

There's a huge difference between the types of cases, though. A 90% poisonous twinkie is certainly fine to call poisonous[1], but a 90% male group isn't reasonable to call male. You said "if most people who would say they are in C are not actually working that way and are deceptively presenting as C"; that seems far more like the latter than the former, because "fake" implies the entire thing is fake[2].

  1. ^

    Though so is a 1% poisonous twinkie; perhaps the better example is that a meal which is 90% protein would be a "protein meal" without implying there is no non-protein substance present.

  2. ^

    There is a sense in which this isn't true; if 5% of an image of a person is modified, I'd agree that the image is fake - but this is because the claim of fakeness is about the entirety of the image, as a unit. In contrast, if there were 20 people in a composite image, and 12 of them were AI-fakes and 8 were actual people, I wouldn't say the picture is "of fake people"; I'd need to say it's a mixture of fake and real people. Which seems like the relevant comparison if, as you said in another comment, you are describing "empirical clusters of people"!

Which side of the AI safety community are you in?
Davidmanheim · 15h · 2 karma · 0 agreement

If you said "mostly bullshit" or "almost always disingenuous" I wouldn't argue, though I would still question whether it's actually a majority of people in group C - something I doubt, but am very unsure about. Saying it is fake, however, would usually mean it is not a real thing anyone believes, rather than that the view is unusual, confused, or wrong.

Closely related to: You Don't Exist, Duncan.

Which side of the AI safety community are you in?
Davidmanheim · 17h · 2 karma · 0 agreement

I'll point to a similarly pessimistic but divergent view on how to manage the likely bad transition to an AI future, which I co-authored recently:

Instead, we argue that we need a solution for preserving humanity and improving the future despite not having an easy solution of allowing gradual disempowerment coupled with single-objective beneficial AI...

The first question, one that is central to some discussions of long-term AI risk, is how can humanity stay in control after creating smarter-than-human AI? 

But given the question, the answer is overdetermined. We don't stay in control, certainly not indefinitely. If we build smarter-than-human AI, which is certainly not a good idea right now, at best we must figure out how we are ceding control. If nothing else, power-seeking AI will be a default, and will be disempowering, even if it's not directly an existential threat. Even if we solve the problem of treachery robustly and build the infantilizing vision of superintelligent personal assistants, over long enough time scales it's implausible that we build that race of more intelligent systems and yet never cede any power. (And if we somehow did, the implications of keeping increasingly intelligent systems in permanent bondage seem at best morally dubious.)

So, if we (implausibly) happen to be in a world of alignment-by-default, or (even more implausibly) find a solution to intent alignment and agree to create a super-nanny for humanity, what world would we want? Perhaps we use this power to collectively evolve past humanity - or perhaps the visions of pushing for transhumanism before ASI, to allow someone or some group to stay in control, are realized. Either way, what then for the humans?

Which side of the AI safety community are you in?
Davidmanheim · 18h · 2 karma · 0 agreement

Why is there so little Rat brainpower devoted to the pragmatics of how AI safety could be advanced within the global and national political contexts?*

 

As someone who was there, I think the portrayal of the 2020-2022 era efforts to influence policy is a strawman, but I agree that it was the community's first serious attempt to engage politically - and an effort which preceded SBF in lots of different ways - so it's tragic (and infuriating) that SBF poisoned the well by backing it and then having it collapse. And most of the reason the existential risk community did relatively little on pragmatic political action in 2022-2024 was directly because of that collapse!

Which side of the AI safety community are you in?
Davidmanheim · 18h · 3 karma · 0 agreement

Remaining in this frame of "we make our case for [X course of action] so persuasively that the world just follows our advice" does not make for a compelling political theory on any level of analysis.


But there are times when it does work!

Which side of the AI safety community are you in?
Davidmanheim · 18h · 5 karma · 1 agreement

...but it's not fake, it's just confused according to your expectations about the future - and yes, some people may say it dishonestly, but we should still be careful not to deny that people can believe things you disagree with, just because those beliefs conflict with your map of the territory.

That said, I don't see as much value in dichotomizing the groups as others seem to.

Which side of the AI safety community are you in?
Davidmanheim · 18h · 4 karma · -2 agreement

As I said below, I think people are ignoring many different approaches compatible with the statement, and so they are confusing the statement with a call for international laws or enforcement (as you said, "attempts to make it as a basis for laws"), which it does not mention. I suggested some alternatives in that comment:

"We didn't need laws to get the 1975 Alisomar moratorium on recombinant DNA research, or the email anti-abuse (SPF/DKIM/DMARC) voluntary technical standards, or the COSPAR guidelines that were embraced globally for planetary protection in space exploration, or press norms like not naming sexual assault victims - just strong consensus and moral suasion. Perhaps that's not enough here, but it's a discussion that should take place which first requires clear statement about what the overall goals should be."

Which side of the AI safety community are you in?
Davidmanheim · 18h · 5 karma · -1 agreement

I strongly support the idea that we need consensus building before looking at specific paths forward - especially since the goal is clearly far more widely shared than agreement about which strategy should be pursued.

For example, contra Dean Bell's unfair strawman, this isn't a back-door to insist on centralized AI development, or even necessarily a position that requires binding international law! We didn't need laws to get the 1975 Asilomar moratorium on recombinant DNA research, or the email anti-abuse (SPF/DKIM/DMARC) voluntary technical standards, or the COSPAR guidelines that were embraced globally for planetary protection in space exploration, or press norms like not naming sexual assault victims - just strong consensus and moral suasion. Perhaps that's not enough here, but it's a discussion that should take place, which first requires a clear statement about what the overall goals should be.

This is also why I think the point about lab employees, and making safe pathways for them to speak out, is especially critical. Current discussions about whistleblower protections don't go far enough, and while group commitments ("if N others from my company") are valuable, private speech on such topics should be even more clearly protected. One reason for the inability to get consensus among lab employees is that there isn't currently common knowledge within labs about how many people think the goal is the wrong one, and the labs' incentives to attract investment are opposed to those that would give employees the option of voice or loyalty instead of exit - which explains why, in general, only former employees have spoken out.

OpenAI #15: More on OpenAI’s Paranoid Lawfare Against Advocates of SB 53
Davidmanheim · 8d · 4 karma · 0 agreement

I wonder if seeking a general protective order barring OpenAI from further subpoenas of nonprofits without court review is warranted in this case - that seems like a good first step, and an appropriate precedent for the overwhelmingly likely later cases, given OpenAI's behavior.

Wikitag Contributions
Garden Onboarding · 4 years ago · (+28)

Posts (karma · title · posted · comments)
21 · 12 Angry Agents, or: A Plan for AI Empathy · 10d · 4
14 · Messy on Purpose: Part 2 of A Conservative Vision for the Future · 17d · 3
66 · The Counterfactual Quiet AGI Timeline · 19d · 5
25 · A Conservative Vision For AI Alignment · 2mo · 34
22 · Semiotic Grounding as a Precondition for Safe and Cooperative AI · 3mo · 0
42 · No, We're Not Getting Meaningful Oversight of AI · 4mo · 4
20 · The Fragility of Naive Dynamism · 5mo · 1
15 · Therapist in the Weights: Risks of Hyper-Introspection in Future AI Systems · 6mo · 1
9 · Grounded Ghosts in the Machine - Friston Blankets, Mirror Neurons, and the Quest for Cooperative AI · 6mo · 0
7 · Davidmanheim's Shortform (Ω) · 9mo · 18