dx2684

We (or at least a majority of humans) do still have inner desires to have kids, though; they just get balanced out by other considerations, mostly creature comforts/not wanting to deal with the hassle of kids. But yeah, evolution did not foresee birth control, so that's a substantial misgeneralization.

We are still a very successful species overall according to IGF, but birth rates continue to decline, which is why I made my last point about inner alignment possibly drifting farther and farther away the stronger the inner optimizer (e.g. human culture) becomes.

dx2643

I saw that Katja Grace has said something similar here; I'm just putting my own spin on the idea.

The relevance of the evolutionary analogy for inner alignment has long been discussed in this community, but one observation that seems to go unmentioned is that humans are still... pretty good at inclusive genetic fitness? Even in way-out-of-distribution environments like modern society, we still have strong desires to eat food, stay alive, find mates, and reproduce (although the last one has declined recently; IGF hasn't totally generalized). We don't monomaniacally optimize for IGF, but we (and probably future NN-based AIs) don't monomaniacally optimize for anything, and our motivational circuits still do a pretty good job of keeping our species alive. So... why should we expect future AIs to catastrophically fail (i.e. be completely misaligned, in the inner sense, with what we wanted) at the actions we rewarded in RL training, which should be a much stronger outer optimizer than evolution?

Some possible objections:

  • "Human values are more fragile than IGF, so it's much easier to catastrophically fail on human values"
    • Is this true? Is it really easier to misgeneralize on human values than on IGF? Maybe, but we have a lot of animal skulls on the road that say otherwise
    • More relevantly, modern LLMs have already learned human values pretty well, so the difficulty of enacting said values shouldn't matter as much if the concepts already exist in the weights (I'm less sure about this)
  • "Optimizing a generally intelligent, situationally aware agent presents unique challenges compared to evolution because of scheming, gradient hacking, wireheading, etc."
    • Sure! This definitely seems like a problem. However, by the time the AI gains the capabilities needed for scheming, its inner alignment would have to be absolutely terrible for catastrophic effects to occur once it's out of training; otherwise we end up in the "mostly fine" state that evolution stumbled into with humans.
  • "AIs could drift off over time in the same way that humans seem to be currently with evolution"
    • Yep, this also seems like a problem. Hopefully general capabilities allow a value-aligned AI to strategically preserve its values over time. We could also continually optimize our AIs; hopefully gradient descent never becomes ~billions of times weaker than some inner optimizer, the way evolution is relative to human culture.

dx2621

The thing is, there exist plenty of popular movies about rogue AIs taking over the world (2001, Terminator, etc.), so the concept should already be present in popular culture. The roadblocks seem to be:

  1. The threat somehow doesn't seem as tangible or threatening as, for example, ISIS developing a bioweapon or the CCP permanently dominating the world. One explanation is that the reference class for "enemy does bad things with new technology" or other near-term threat models contains lots of examples throughout history, whereas "species smarter than humans" contains none. Related:
  2. The threat doesn't seem realistic, i.e. people (even those who want to accelerate towards AGI) have long timelines. Hypothetically, if you truly "feel the AGI" and understand that we're close to building something smarter than us in every way, the idea that we should make sure it does what we want should be intuitive. I don't know if making people "feel the AGI" is a smart PR strategy, but nevertheless this does seem to still be a barrier to the public taking AGI risk seriously.

dx2641

In this case, the starving person presumably has to press the button or else starve to death, and thus has no bargaining power. The other person only has to offer the bare minimum beyond what the starving person needs to survive, and the starving person must take the deal. In Econ 101 (assuming away monopolies, information asymmetry, etc.), exploited workers do have bargaining power because they can work for other companies, which is why companies can't get away with stupid, spiteful actions in the long run.

dx2632

It might be relevant to note that the meaningfulness of this coherence definition depends on the chosen environment. For instance, take a deterministic forest MDP, where an agent at a state $s$ can never return to $s$ for any $s$ and there is only one path between any two states, suppose we have a deterministic policy $\pi$, and write $s' = \pi(s)$, $s'' = \pi(s')$, etc. Then for the zero-current-payoff Bellman equations, we only need that $V(s') \ge V(t)$ for any successor $t$ of $s$, that $V(s'') \ge V(t')$ for any successor $t'$ of $s'$, etc. We can achieve this easily by, for example, letting all values except those of the states $\pi$ selects be near-zero; since each state is a successor of at most one state (as otherwise there would be a cycle), these assignments never conflict, and the criterion is satisfied. Thus, every $\pi$ is coherent in this environment. (I haven't done the explicit math here, but I suspect that this also works for non-deterministic $\pi$ and stochastic MDPs.)
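
To make the construction concrete, here's a minimal sketch in Python (mine, not from the post; it assumes the coherence condition reduces to the greedy inequality described above, and the toy forest and state names are made up):

```python
# Minimal sketch: check that an arbitrary deterministic policy on a small
# forest MDP admits a value function V with V(pi(s)) >= V(t) for every
# successor t of s, which is the inequality the zero-current-payoff Bellman
# condition is taken to reduce to above.
import random

# A forest MDP: each state maps to its successors, and each state appears as
# a successor of at most one state (no cycles, unique paths).
successors = {
    "r": ["a", "b"],
    "a": ["c", "d"],
    "b": ["e"],
    "c": [], "d": [], "e": [],
}

# An arbitrary deterministic policy: pick any successor at each non-leaf state.
policy = {s: random.choice(succ) for s, succ in successors.items() if succ}

# The construction: give the chosen successor a high value and every other
# successor a near-zero value. Because each state has at most one parent in a
# forest, these assignments never conflict.
V = {"r": 1.0}  # root value is arbitrary
for s, succ in successors.items():
    for t in succ:
        V[t] = 1.0 if policy[s] == t else 1e-6

# Verify the greedy inequality at every non-leaf state.
for s, succ in successors.items():
    if succ:
        assert all(V[policy[s]] >= V[t] for t in succ), (s, succ)
print("policy:", policy)
print("V:", V)
```

Re-running this with different random choices of `policy` should always pass the assertion, which is the sense in which the definition doesn't discriminate between policies in this environment.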

Importantly, under the common framing of language models in an RL setting, where each state is a sequence of tokens and each action appends a token to a sequence of length $n$ to produce a sequence of length $n+1$, the environment is a deterministic forest, as there is only one way to "go between" two sequences (if one is a prefix of the other, choose the remaining tokens in order). Thus, any language model is coherent, which seems unsatisfying. We could try using a different environment, but this risks losing stochasticity (since the output logits of an LM are determined by its input sequence) and gets complicated pretty quickly (use natural abstractions/world models as states?).
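
As a quick sanity check on the forest claim (again my own sketch; the toy vocabulary and length cap are arbitrary):

```python
# Enumerate all token sequences up to a small length and confirm the induced
# state graph is a forest: every non-empty sequence has exactly one parent,
# namely its length-(n-1) prefix, so the construction above applies.
from itertools import product

vocab = ["a", "b"]  # toy vocabulary
max_len = 3

states = [tuple(seq) for n in range(max_len + 1) for seq in product(vocab, repeat=n)]

def parents(state):
    # A sequence can only be reached from its one-token-shorter prefix.
    return [s for s in states if s == state[:-1]]

assert all(len(parents(s)) == 1 for s in states if s)  # unique parent => forest
print(f"{len(states)} states, each non-empty one with a unique parent")
```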

dx2610

Right, I think this somewhat corresponds to the "how long it takes a policy to reach a stable loop" (the "distance to loop" metric), which we used in our experiments.
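
For reference, here's roughly what I mean by that metric (a toy sketch of the idea, not our actual experimental code; the deterministic policy/transition assumption is just for illustration):

```python
# "Distance to loop" read as: the number of steps a deterministic policy takes
# before its trajectory first enters the cycle it then repeats forever.
def distance_to_loop(start, step):
    """step(state) -> next state under the (deterministic) policy."""
    seen = {}  # state -> time index at which it was first visited
    state, t = start, 0
    while state not in seen:
        seen[state] = t
        state, t = step(state), t + 1
    return seen[state]  # steps taken before the first state of the loop

# Toy example: 0 -> 1 -> 2 -> 3 -> 2 -> ...  (tail of length 2, then a 2-cycle)
transition = {0: 1, 1: 2, 2: 3, 3: 2}
print(distance_to_loop(0, transition.__getitem__))  # -> 2
```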

What did you use your coherence definition for?