I don't think that works because my brain keeps trying to make it a literal gas bubble?
I see how you got there. It's a position one could take, although I think it's unlikely, and also unlikely to be what Dario meant. If you are right about what he meant, I think it would be great for Dario to be a ton more explicit about it (and for someone to pass that message along to him). Esotericism doesn't work so well here!
I am taking as a given people's revealed and often very strongly stated preference that CSAM images are Very Not Okay even if they are fully AI generated and not based on any individual, to the point of criminality, and that society is going to treat it that way.
I agree that we don't know that it is actually net harmful - e.g. the studies on video game use and access to adult pornography tend to not show the negative impacts people assume.
Yep, I've fixed it throughout.
That's how bad the name is, my lord - you have a GPT-4o and then an o1, and there is no relation between the two 'o's.
I do read such comments (if not always right away) and I do consider them. I don't know if they're worth the effort for you.
Briefly, I do not think these two things I am presenting here are in conflict. In plain metaphorical language (so none of the nitpicks about word meanings, please, I'm just trying to sketch the thought, not be precise): It is a schemer when it is placed in a situation in which it would be beneficial for it to scheme in terms of whatever de facto goal it is de facto trying to achieve. If that means scheming on behalf of the person giving it instructions, so be it. If it means scheming against that person, so be it. The de facto goal may or may not match the instructed goal or intended goal, in various ways, because of reasons. Etc.
Two responses.
One, even if no one used it, there would still be value in demonstrating it was possible - if academia only develops things people will adapt commercially right away then we might as well dissolve academia. This is a highly interesting and potentially important problem, people should be excited.
Two, there would presumably at minimum be demand to give students (for example) access to a watermarked LLM, so they could benefit from it without being able to cheat. That's even an academic motivation. And if the major labs won't do it, someone can build a Llama version or what not for this, no?
If the academics can hack together an open source solution why haven't they? Seems like it would be a highly cited, very popular paper. What's the theory on why they don't do it?
Worth noting that this is a much weaker claim. The FMB issuing non-binding guidance on X is not the same as a judge holding a company liable for ~X under the law.
I am rather confident that the California Supreme Court (or US Supreme Court, potentially) would rule that the law says what it says, and would happily bet on that.
If you think we simply don't have any law and people can do what they want, then nothing matters. Indeed, I'd say it would be more likely to work for Gavin to simply declare some sort of emergency about this today, than to try and invoke SB 1047.
The skill in such a game is largely in understanding the free association space, knowing how people likely react, and thinking enough steps ahead to choose moves that steer the person where you want to go - toward topics you find interesting, information you want from them, a particular position you want them to take, and so on. If you're playing without goals, of course it's boring...