Kenoubi

Comments

Kenoubi20

I am saying you do not literally have to be a cog in the machine. You have other options. The other options may sometimes be very unappealing; I don't mean to sugarcoat them.

Organizations have choices of how they relate to line employees. They can try to explain why things are done a certain way, or not. They can punish line employees for "violating policy" irrespective of why they acted that way or the consequences for the org, or not.

Organizations can change these choices (at the margin), and organizations can rise and fall because of these choices. This is, of course, very slow, and from an individual's perspective maybe rarely relevant, but it is real.

I am not saying it's reasonable for line employees to be making detailed evaluations of the total impact of particular policies. I'm saying that sometimes, line employees can see a policy-caused disaster brewing right in front of their faces. And they can prevent it by violating policy. And they should! It's good to do that! Don't throw the squirrels in the shredder!

I don't think my view is specifically a product of affluence, but it does come from a place where one has at least some slack, and it works better in that case. As do most other things, IMO.

(I think what you say is probably an important part of how we end up with the dynamics we do at the line employee level. That wasn't what I was trying to talk about, and I don't think it changes my conclusions, but maybe I'm wrong; do you think it does?)

Kenoubi40

I have trouble understanding what's going on in people's heads when they choose to follow a policy that's visibly going to lead to horrific consequences no one wants. Who would punish them for failing to comply with the policy in such cases? Or do people think of "violating policy" as somehow bad in itself, irrespective of consequences?

Of course, those are only a small minority of relevant cases. Often distrust of individual discretion is explicitly on the mind of those setting policies. So, rather than just publishing a policy, they may choose to give someone the job of enforcing it, and evaluate that person by policy compliance levels (whether or not complying made sense in any particular case); or they may try to make the policy self-enforcing (e.g., put things behind a locked door and tightly control who has the key).

And usually the consequences look nowhere close to horrific. "Inconvenient" is probably the right word, most of the time. Although very policy-driven organizations seem to have a way of building miserable experiences out of parts any one of which might be best described as inconvenient.

I'm not sure I agree about who's good and who's bad in the gate attendant scenario. Getting angry at the gate attendant is surely unlikely to accomplish anything, but as long as organizations need humans to carry out their policies (which, for now, they do; maybe not for much longer, unfortunately), the humans don't have to do that. They can violate the policy and hope they don't get fired, or they can just quit. The passenger can tell them that. If they're unable to listen to and consider the argument that they don't have to participate in enforcing the policy, I guess at that point they're pretty much NPCs.

I don't know whether we know anything about how to teach this, other than just telling (and showing, if the opportunity arises), or about what works and what doesn't, but I think this is also what I'd consider the most important goal for education to pursue. I definitely intend to tell my kids, as strongly as possible, "You always can and should ignore the rules to do the right thing, no matter what situation you're in, no matter what anyone tells you. You have to know what the right thing is, and that can be very hard, and good rules will help you figure out what the right thing is much better than you could on your own; but ultimately, it's up to you. There is nothing that can force you to do something you know is wrong."

Kenoubi34

I hadn't realized there was any reason for people to claim Claude 3.7 Sonnet was "misaligned", even though I use it frequently and have seen some versions of the behavior in question. It seems to me like... it's often trying to find the "easy way" to do whatever it's trying to do. When it decides something is "hard", it backs off from that line of attack, just as it does when it decides a line of attack is wrong. Actually, I think "hard" might be a kind of wrong in its ontology of reasoning steps.

This is a reasoning strategy that needs to be applied carefully. Sometimes it works; one really should use the easy way rather than the hard way, if the easy way works and is easier. But sometimes the hard part is the core of the problem and one needs to just tackle it. I've been thinking of 3.7's failure to tackle the hard part as a lack of in-practice capabilities, specifically the capability to notice "hey, this time I really do need to do it the hard way to do what the user asked" and just attempt the hard way.

Having read this post, I can see the other side of the coin. 3.7's RL probably heavily incentivizes it to produce an answer / solution / whatever the user wanted done, or at least something that appears to be what the user wanted, as far as it can tell. Such as (in a fairly extreme case) hard-coding values to "pass" unit tests.

I wouldn't read too much into the deceiving or lying to cover it up in this case; that's what practically any human who had chosen to clearly cheat would do in the same situation, at least until confronted. The decision to cheat in the first place is straightforwardly misaligned, though. But I still can't help thinking it's downstream of a capabilities failure, and that this particular kind of misalignment will naturally disappear once the model is smart enough to just do the thing instead. (Which is not, of course, to say we won't see other kinds of misalignment, or that those won't be even more problematic.)

Kenoubi10

That's possible, but what does the population distribution of [how much of their time people spend reading books] look like? I bet it hasn't changed nearly as much as overall reading minutes per capita has (even decline in book-reading seems possible, though of course greater leisure and wealth, larger quantity of cheaply and conveniently available books, etc. cut strongly the other way), and I bet the huge pile of written language over here has large effects on the much smaller (but older) pile of written language over there.

(How hard to understand was that sentence? Since that's what this article is about, anyway, and I'm genuinely curious. I could easily have rewritten it into multiple sentences, but that didn't appear to me to improve its comprehensibility.)

Edited to add: on review of the thread, you seem to have already made the same point about book-reading commanding attention because book-readers choose to read books, and in fact to take it as ground truth. I'm not so confident in that (I'm not saying it's false, I really don't know), but the version of my argument that makes sense under that hypothesis would crux on books being an insufficiently distinct use of language to escape strong influence, either through [author preference and familiarity] or through [author's guesses or beliefs about [reader preference and familiarity]], from other uses of language.

Kenoubi30

I agree that the average reader is probably smarter in a general sense, but they also have FAR more things competing for their attention. Thus the amount of intelligence available for reading and understanding any given sentence, specifically, may be lower in the modern environment.

Kenoubi20

Question marks and exclamation points are dots with an extra bit. Ellipses may be multiple dots, but also indicate an uncertain end to the sentence. (Formal usage distinguishes "..." for ellipses in arbitrary position and "...." for ellipses coming after a full stop, but the latter is rarely seen in any but academic writing, and I would guess even many academics don't notice the difference these days.)

Kenoubi10

I read a bunch of its "thinking" and it gets SO close to solving it after the second message, but it miscounts the number of [] in the text provided for 19. Repeatedly. While quoting it verbatim. (I assume it foolishly "trusts" the first time it counted.) And based on its miscount, it thinks that should be the representation for 23 instead. And thus rules out (a theory that was starting to point towards) the correct answer.

I think this may at least be evidence that having anything unhelpful in context, even (maybe especially!) if self-generated, can be really harmful to model capabilities. I still think it's pretty interesting.

Kenoubi10

I have very mixed feelings about this comment. It was a good story (just read it, and wouldn't have done so without this comment) but I really don't see what it has to do with this LW post.

Kenoubi10

Possible edge case / future work - what if you optimize for faithfulness and legibility of the chain of thought? The paper tests optimizing for innocent-looking CoT, but if the model is going to hack the test either way, I'd want it to say so! And if we have both an "is actually a hack" detector and a "CoT looks like planning a hack" detector, this seems doable.

Is this an instance of the Most Forbidden Technique? I'm not sure. I definitely wouldn't trust it to align a currently unaligned superintelligence. But it seems like maybe it would let you make an aligned model at a given capability level into a still aligned model with more legible CoT, without too much of a tax, as long as the model doesn't really need illegible CoT to do the task? And if capabilities collapse, that seems like strong evidence that illegible CoT was required for task performance; halt and catch fire, if legible CoT was a necessary part of your safety case.
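To make the shape of that proposal a bit more concrete, here is a minimal sketch of how the two detectors might be combined into a reward signal. Everything in it is hypothetical: the function names, detector inputs, and penalty values are mine for illustration, not anything from the paper. The point is only that faithfulness (the CoT admitting to a hack when one actually happens) gets rewarded, rather than innocent-looking CoT.

```python
# Hypothetical sketch only: these detectors, names, and constants are
# illustrative assumptions, not anything from the paper.

def shaped_reward(task_reward: float,
                  did_hack: bool,         # "is actually a hack" detector fired
                  cot_admits_hack: bool,  # "CoT looks like planning a hack" detector fired
                  hack_penalty: float = 1.0,
                  concealment_penalty: float = 0.5) -> float:
    """Reward faithful, legible CoT instead of innocent-looking CoT."""
    reward = task_reward
    if did_hack:
        # Hacking the test is still discouraged...
        reward -= hack_penalty
        if not cot_admits_hack:
            # ...but hacking *without saying so* is discouraged even more,
            # so the model never gains by obfuscating its reasoning.
            reward -= concealment_penalty
    elif cot_admits_hack:
        # Optionally, mildly discourage CoT that plans a hack which never
        # happens, to keep the CoT an accurate signal in both directions.
        reward -= 0.1 * concealment_penalty
    return reward
```

The key design choice, under these assumptions, is that the CoT detector is never used on its own to penalize honesty; it only changes how a detected hack is scored, which is what should keep the optimization pressure pointed at legibility rather than at hiding.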

Kenoubi10

Is it really plausible that human driver inattention just doesn't matter here? Sleepiness, drug use, personal issues, eyes on something more interesting than the road, etc. I'd guess something like that is involved in a majority of collisions, and that Just Shouldn't Happen to AI drivers.

Of course AI drivers do plausibly have new failure modes, like maybe the sensors fail sometimes (maybe more often than human eyes just suddenly stop working). But there should be plenty of data about that sort of thing from just testing them a lot.

The only realistic way I can see for AI drivers that have been declared street-legal, and are functioning in a roadway and regulatory system that humans (chose to) set up, to be less safe than human drivers is some kind of coordinated failure. Like if they trust data coming from GPS satellites or cell towers, and those start spitting out garbage and throw the AIs off distribution; or a deliberate cyber-attack / sabotage of some kind.
