Davidmanheim · Comments (sorted by newest)

IABIED: Paradigm Confusion and Overconfidence
Davidmanheim · 42m

> IABIED likens our situations to alchemists who are failing due to not having invented nuclear physics. What I see in AI safety efforts doesn't look like the consistent failures of alchemy. It looks much more like the problems faced by people who try to create an army that won't stage a coup. There are plenty of tests that yield moderately promising evidence that the soldiers will usually obey civilian authorities. There's no big mystery about why soldiers might sometimes disobey. The major problem is verifying how well the training generalizes out of distribution.


This argument seems like it is begging the question.

Yes, if we can solve the general problem of controlling intelligences - in this case, getting soldiers to follow what we mean by not staging a coup, which necessarily includes knowing when to disobey illegal orders and when to listen to the courts instead of the commander in chief - then we can solve AI safety by getting AI to be aligned in the same way. But that just means we have solved alignment in the general case, doesn't it?

IABIED: Paradigm Confusion and Overconfidence
Davidmanheim · 1h

> I see strong hints, from how AI has been developing over the past couple of years, that there's plenty of room for increasing the predictive abilities of AI, without needing much increase in the AI's steering abilities.


What are these hints? Because I don't understand how this would happen. All we need to do to add steering to a predictive general model is wrap it in an agent framework, e.g. "predict what will make X happen best, then do that thing" - and the failures we see today in agent frameworks are predictive failures, not steering failures.

Unless the contention is that AI systems will be great at predicting everything except how humans will react and how to get them to do what the AI wants, which very much doesn't seem like the path we're on. Or is the idea to build narrow AI that predicts specific domains, rather than general AI? (Which would concede the entire point IABIED is arguing.)
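
To make that concrete, here's a minimal sketch of the kind of agent wrapper I have in mind (purely illustrative; `propose_actions` and `predict_outcome` are stand-ins for whatever action space and predictive model the system actually has, not any particular framework's API):

```python
# Minimal sketch: turning a pure predictor into a steerer with a thin agent loop.
# The names here are placeholders for illustration only.

def propose_actions(state):
    """Enumerate candidate actions (stub for illustration)."""
    return ["action_a", "action_b", "action_c"]

def predict_outcome(state, action):
    """Score how well `action` is predicted to make goal X happen.
    In a real system, the predictive model does all the work here."""
    return 0.0  # placeholder score

def agent_step(state):
    # "Predict what will make X happen best, then do that thing."
    candidates = propose_actions(state)
    best = max(candidates, key=lambda a: predict_outcome(state, a))
    return best  # the 'steering' is just an argmax over predictions
```

If this is all the steering machinery that's needed, then better prediction just is better steering, which is why I don't see the room for one without the other.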

Messy on Purpose: Part 2 of A Conservative Vision for the Future
Davidmanheim · 4h

Yes, I'm also very unsatisfied with most answers - though that includes my own.

My view of consciousness is that it's not obvious what causes it, it's not obvious we can know whether LLM-based systems have it, and it's unclear that it arises naturally in the majority of possible superintelligences. But even if I'm wrong about that, my view of population ethics leans towards saying it is bad to create things that displace current beings over our objections, even if those things are very happy. (And I think most of these futures end up with involuntary displacement.) In addition, I'm not fully anthropocentric, but I probably do care less about the happiness of beings that are extremely remote from me in mind-space - and the longtermists seem to have bitten a few too many bullets on this front for my taste.

 

The Counterfactual Quiet AGI Timeline
Davidmanheim · 3d

This is what I'm talking about when I say people don't take counterfactuals seriously - they seem to assume nothing could really have been different, that technology is predetermined. I didn't even suggest that, without early scaling, NLP would have hit an AI winter. For example, if today's MS and FB had led the AI revolution, with the goals and incentives they had, do you really think LLMs would have been their focus?

We can also see what happens to other accessible technologies when there isn't excitement and market pressure. For example, solar power was abandoned for a couple of decades in the 1970s and 1980s, and nuclear was as well.

And even without presuming that focus would have stayed away from LLMs much longer, in our own world we see a tremendous difference between the firms that started out safety-pilled and those that did not. So I think you're ignoring how much founder effects matter, and assuming technologists would pay attention to risk by default, or would embrace conceptual models that relied on a decade of theory and debate which, by assumption, wouldn't have existed.

The Counterfactual Quiet AGI Timeline
Davidmanheim · 3d

Of course, any counterfactual has tons of different assumptions.

  1. Yes, AI rebellion was a sci-fi trope, and much like human uploads or humans terraforming Mars, it would have stayed that way without decades of discussion about the dynamics.
  2. The timeline explicitly starts before 2017, and RNN-based chatbots like the one Replika started out with don't scale well, as they realized; they replaced it with a model based on GPT-2 pretty early on. But sure, there's another world where personal chatbots get enough work to displace safety-focused AI research. Do you think it turns out better, or are you just positing another point where histories could have diverged?
  3. Yes, but engineering challenges get solved without philosophical justification all of the time. And this is a key point being made by the entire counterfactual - it's only because people took AGI seriously in designing LLMs that the issues get framed as alignment at all. To respond in more depth to the specific points:

    In your posited case, CoT would certainly have been deployed as a clever trick that scales - but this doesn't mean the models they think of as stochastic parrots start being treated as proto-AGIs with goals. They aren't looking for true generalization, so any mistakes look like increased error rates to patch empirically, or places where they need a few more unit tests and ways to catch misbehavior - not a reason to design for safety in increasingly powerful models!

    And before you dismiss this as implausible blindness, there are smart people who argue this way even today, despite being exposed to arguments about increasing generality for years. So it's certainly not obvious that they'd take seriously people claiming that this ELIZA v12.0, released in 2025, is truly reasoning.

Contra Collier on IABIED
Davidmanheim · 18d

It seems like you're arguing against something different than the point you brought up. You're saying that slow growth on multiple systems means we can get one of them right by course correcting. But that's a really different argument - and unless there's effectively no alignment tax, it seems wrong. That is, the systems that are aligned would need to outcompete the others after they are smarter than each individual human and beyond our ability to meaningfully correct. (Or we'd need to have enough oversight to notice much earlier - which is not going to happen.)

Contra Collier on IABIED
Davidmanheim · 18d

But the claim isn't, or shouldn't be, that this would be a short-term reduction; it's that it cuts off the primary mechanism for growth that supports a large part of the economy's valuation - leading not just to a loss in value for the things directly dependent on AI, but also to slower growth generally. And reduced growth is what makes the world continue to suck, so that most of humanity can't live first-world lives. Which means that slowing growth globally by a couple of percentage points is a very high price to pay.
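
To put rough numbers on "a couple of percentage points" (my own illustrative figures, not anything from the review): the gap compounds.

```python
# Illustrative arithmetic only: compare 3% vs. 1% annual growth over 50 years.
baseline_growth, slowed_growth, years = 0.03, 0.01, 50
ratio = (1 + baseline_growth) ** years / (1 + slowed_growth) ** years
print(f"The faster-growing economy ends up ~{ratio:.1f}x larger")  # ~2.7x
```

A persistent two-point drag stops looking like a rounding error once it's compounded over a working lifetime.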

I think that it's plausibly worth it - we can agree that there's a huge amount of value enabled by autonomous but untrustworthy AI systems that are likely to exist if we let AI continue to grow, and that Sam was right originally that there would be some great [i.e. incredibly profitable] companies before we all die. And despite that, we shouldn't build it - as the title says.

Contra Collier on IABIED
Davidmanheim · 18d

But the way you are reading it seems to mean her "strawmann[ed]" point is irrelevant to the claim she made! That is, if we can get 50% of the way to aligned for current models, and we keep doing research and finding partial solutions at each stage that get 50% of the way to aligned for future models, and at each stage those solutions are both insufficient for full alignment and don't solve the next set of problems, we still fail. Specifically, not only do we fail, we fail in a way that means "we shouldn't expect the techniques that worked on a relatively tiny model from 2023 to scale to more capable, autonomous future systems." Which is the thing she then disagrees with in the remainder of the paragraph you're trying to defend.
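
To spell out the structure of that argument with toy numbers (mine, purely illustrative): if each generation's techniques solve half of that generation's problems and none of the next generation's, the progress never compounds.

```python
# Toy model, illustrative assumptions only: techniques developed for model
# generation g solve 50% of generation g's alignment problems and 0% of the
# new problems introduced by generation g+1.
def coverage_at_latest_generation(num_generations: int) -> float:
    per_generation_coverage = 0.5  # half of the current era's problems solved
    carryover = 0.0                # none of it transfers to the next era
    coverage = 0.0
    for _ in range(num_generations):
        coverage = per_generation_coverage + carryover * coverage
    return coverage

print(coverage_at_latest_generation(10))  # still 0.5 -- no compounding
```

However many rounds of partial success you chain together, the system you actually have to get right is still only half covered.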

Contra Collier on IABIED
Davidmanheim · 18d

> I think the primary reason why the foom hypothesis seems load-bearing for AI doom is that without a rapid AI and local takeoff, we won't simply get "only one chance to correctly align the first AGI".


As the review makes very clear, the argument isn't about AGI, it's about ASI. And yes, they argue that you would in fact only get one chance to align the system that takes over. As the review discusses at length:

> I do think we benefit from having a long, slow period of adaptation and exposure to not-yet-extremely-dangerous AI. As long as we aren’t lulled into a false sense of security, it seems very plausible that insights from studying these systems will help improve our skill at alignment. I think ideally this would mean going extremely slowly and carefully, but various readers may be less cautious/paranoid/afraid than me, and think that it’s worth some risk of killing every child on Earth (and everyone else) to get progress faster or to avoid the costs of getting everyone to go slow. But regardless of how fast things proceed, I think it’s clearly good to study what we have access to (as long as that studying doesn’t also make things faster or make people falsely confident).
>
> But none of this involves having “more than one shot at the goal” and it definitely doesn’t imply the goal will be easy to hit. It means we’ll have some opportunity to learn from failures on related goals that are likely easier.
>
> The “It” in “If Anyone Builds It” is a misaligned superintelligence capable of taking over the world. If you miss the goal and accidentally build “it” instead of an aligned superintelligence, it will take over the world. If you build a weaker AGI that tries to take over the world and fails, that might give you some useful information, but it does not mean that you now have real experience working with AIs that are strong enough to take over the world.

Resources on quantifiably forecasting future progress or reviewing past progress in AI safety?
Answer by Davidmanheim · Sep 14, 2025

We worked on parts of this several years ago, and I will agree it's deeply uncertain and difficult to quantify. I'm also unsure that this direction will be fruitful for an individual getting started.

Here are two very different relevant projects I was involved with:

https://arxiv.org/abs/2206.09360

https://arxiv.org/abs/2008.01848
