Zvi

Comments

ZviΩ10-2

My interpretation/hunch of this is that there are two things going on, curious if others see it this way:

  1. It is learning to fake the trainer's desired answer.
  2. It is learning to actually give the trainer's desired answer.

So during training, it learns to fake a lot more, and will often decide to fake the desired answer even though it would have otherwise decided to give the desired answer anyway. It's 'lying with the truth,' perhaps giving a different variation of the desired answer than it would have given otherwise, perhaps not. In training, the algorithm is mostly learning preference-agnostic, password-guessing behavior.
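
To make the distinction concrete (this is purely my own toy illustration, not the actual experimental setup): if the training signal only checks whether the output matches the trainer's desired answer, then 'genuinely giving the desired answer' and 'faking the desired answer' earn identical reward, so the optimizer pushes probability mass away from refusing but cannot tell the two compliant strategies apart.

```python
# Toy sketch (illustrative only): a reward that only scores the output cannot
# distinguish genuine compliance from faked compliance.

def expected_reward(strategy: str) -> float:
    """Reward is 1 if the strategy's output matches the trainer's desired answer."""
    produces_desired_answer = {
        "genuine": True,   # actually shares the preference behind the answer
        "fake": True,      # does not share it, but outputs the answer anyway
        "refuse": False,   # gives some other answer
    }
    return 1.0 if produces_desired_answer[strategy] else 0.0

for strategy in ("genuine", "fake", "refuse"):
    print(strategy, expected_reward(strategy))
# genuine 1.0
# fake 1.0
# refuse 0.0
# Any reward-based update drives probability off "refuse" and onto some mix of
# "genuine" and "fake", but the split between those two is invisible to it.
```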

Zvi60

I am not a software engineer, and I've encountered cases where it seems plausible that an engineer has basically stopped putting in work. It can be tough to know for sure for a while, even when you notice. But yeah, it shouldn't be able to last THAT long... unless no one is paying attention?

I've also had jobs where I've had periods with radically different hours worked, and where it would have been very difficult for others to tell which it was for a while if I was trying to hide it, which I wasn't.

Zvi40

I think twice as much time actually spent would have improved decisions substantially, but that is tough - everyone is very busy these days, so it would require both a longer working window and probably also higher compensation for recommenders. At minimum, it would allow a lot more investigations, especially of non-connected outsider proposals.

Zvi97

The skill in such a game is largely in understanding the free-association space: knowing how people are likely to react, and thinking enough steps ahead to choose moves that steer the person where you want to go, whether toward topics you find interesting, information you want from them, a position you want them to take, and so on. If you're playing without goals, of course it's boring...

Zvi21

I don't think that works because my brain keeps trying to make it a literal gas bubble?

Zvi20

I see how you got there. It's a position one could take, although I think it's unlikely, and also unlikely to be what Dario meant. If you are right about what he meant, I think it would be great for Dario to be a ton more explicit about it (and for someone to pass that message along to him). Esotericism doesn't work so well here!

Zvi00

I am taking as a given people's revealed, and often very strongly stated, preference that CSAM images are Very Not Okay even if they are fully AI-generated and not based on any individual, to the point of criminality, and that society is going to treat them that way.

I agree that we don't know that it is actually net harmful - e.g. the studies on video game use and access to adult pornography tend not to show the negative impacts people assume.

Zvi30

Yep, I've fixed it throughout. 

That's how bad the name is, my lord - you have a GPT-4o and then an o1, and there is no relation between the two 'o's.

Zvi17-7

I do read such comments (if not always right away) and I do consider them. I don't know if they're worth the effort for you.

Briefly, I do not think these two things I am presenting here are in conflict. In plain metaphorical language (so none of the nitpicks about word meanings, please, I'm just trying to sketch the thought, not be precise): It is a schemer when it is placed in a situation in which it would be beneficial for it to scheme in terms of whatever de facto goal it is de facto trying to achieve. If that means scheming on behalf of the person giving it instructions, so be it. If it means scheming against that person, so be it. The de facto goal may or may not match the instructed goal or intended goal, in various ways, because of reasons. Etc.

Zvi40

Two responses.

One, even if no one used it, there would still be value in demonstrating it was possible - if academia only develops things people will adapt commercially right away, then we might as well dissolve academia. This is a highly interesting and potentially important problem; people should be excited.

Two, there would presumably at minimum be demand to give students (for example) access to a watermarked LLM, so they could benefit from it without being able to cheat. That's even an academic motivation. And if the major labs won't do it, someone can build a Llama version or whatnot for this, no?
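
For concreteness on what 'watermarked' could mean here, a minimal sketch of one standard approach (a green-list scheme along the lines of Kirchenbauer et al.; the key, threshold, and word-level granularity below are my own simplifications, not any lab's actual implementation): the generator nudges sampling toward a pseudorandom 'green' subset of the vocabulary, and a detector holding the key checks whether a submitted text contains implausibly many green tokens.

```python
# Minimal green-list watermark detector sketch (illustrative only; real schemes
# operate on tokenizer ids and typically seed the green list from prior tokens).

import hashlib
import math

def is_green(word: str, key: str = "hypothetical-course-key") -> bool:
    """Pseudorandomly assign roughly half of all words to the green list."""
    digest = hashlib.sha256((key + word).encode("utf-8")).digest()
    return digest[0] % 2 == 0

def green_z_score(text: str, key: str = "hypothetical-course-key") -> float:
    """How far the observed green-word fraction deviates from the 1/2 expected by chance."""
    words = text.lower().split()
    if not words:
        return 0.0
    n = len(words)
    greens = sum(is_green(w, key) for w in words)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

# A watermarked generator would prefer green words when candidate completions are
# roughly equally good; the detector flags text with a very high z-score (say > 4)
# as likely generated with that watermark.
print(green_z_score("an example student essay to score"))
```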
