125°F, one of the temperatures mentioned in the article, is not hot enough to kill bacteria, and thus sits in one of the worst parts of the Danger Zone.
While cooking at a slightly higher temperature is a bit safer, 125°F is at the extreme edge of the danger zone and is probably a safe temperature to sous vide at for reasonable periods of time if you're confident in your thermometer, with the caveat that it won't pasteurize the inside of the meat (although we're usually more worried about the outside).
Douglas Baldwin suggests cooking at 130°F because one type of bacteria (Clostridium perfringens) can keep multiplying up to 126.1°F, but if you look at the growth rate in more detail, it's already growing very slowly at 50°C (~122°F), around 1/6th of the rate at the worst temperature (~109°F).
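Since this thread hops between °F and °C, here's a quick conversion sketch for the temperatures mentioned above; the annotations are just the rough figures from this comment, not precise values from Baldwin's tables:

```python
def f_to_c(temp_f):
    """Convert Fahrenheit to Celsius."""
    return (temp_f - 32) * 5 / 9

# Temperatures discussed in this thread; annotations are this comment's rough
# reading of the growth-rate charts, not precise values.
temps_f = {
    109.0: "roughly the worst-case growth temperature for C. perfringens",
    122.0: "= 50°C, where growth is already ~1/6th of the worst-case rate",
    125.0: "the temperature from the article",
    126.1: "upper limit Baldwin cites for C. perfringens multiplication",
    130.0: "the temperature Baldwin suggests cooking at",
}

for temp_f, note in temps_f.items():
    print(f"{temp_f:6.1f}°F = {f_to_c(temp_f):4.1f}°C  ({note})")
```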
Is your goal here to isolate whichever aspect of my response lets you keep maintaining that "legal regulatory capture isn't happening" for as long as you can?
I'm not the person you're arguing with, but wanted to jump in to say that pushing back on the weakest part of your argument is a completely reasonable thing for them to do and I found it weird that you're implying there's something wrong with that.
I also think you're missing how big a problem it is that companies don't actually know how to stop LLMs from giving legal advice. Maybe they could add strong enough guardrails to hosted models to at least make it not worth the effort to ask them for legal advice, but they definitely don't know how to do that for models people can download and run themselves.
That said, I could believe in a future where lawyers force the big AI companies to make their models too annoying to easily use for legal advice, and prevent startups from making products directly designed to offer AI legal advice.
The reason I'm skeptical of this is that it doesn't seem like you could enforce a law against using AI for legal research. As much as lawyers might want to ban this as a group, individually they all have strong incentives to use AI anyway and just not admit it.
Although this assumes doing research and coming up with arguments is most of their job. It could be that most of their job is things that are harder to quietly offload to an AI, like meeting with clients and making arguments in court.
It seems like it would be hard to detect if smart lawyers are using AI since (I think) lawyers' work is easier to verify than it is to generate. If a smart lawyer has an AI do research and come up with an argument, and then they verify that all of the citations make sense, the only way to know they're using AI is that they worked anomalously quickly.
Start watching 4K videos on streaming services when possible, even if you don’t have a 4K screen. You won’t benefit from the increased resolution since your device will downscale it back to your screen’s resolution, but you will benefit from the increased bitrate that the 4K video probably secretly has.
I'm not sure if anyone still does this, but there was also a funny point early in the history of 4K streaming when people would encode 4K video at the same bitrate as 1080p, so they could technically advertise the video as 4K, but it was completely pointless since it didn't actually have any more detail than the 1080p version.
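To put rough numbers on why that's pointless: at the same bitrate, the 4K stream just spreads the same bits over four times as many pixels. A back-of-the-envelope sketch (the 8 Mbps and 30 fps figures are assumed examples, not any particular service's numbers):

```python
# Bits-per-pixel comparison at an assumed shared bitrate of 8 Mbps at 30 fps.
# These figures are illustrative, not taken from any real service.
bitrate_bps = 8_000_000
fps = 30

resolutions = {
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
}

for name, (width, height) in resolutions.items():
    bits_per_pixel = bitrate_bps / (width * height * fps)
    print(f"{name}: {bits_per_pixel:.3f} bits per pixel per frame")
```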
The 1080p video at the same bitrate is also a lossy compression of the original, and decoding it gives you an approximation of the original 1080p video that isn't quite correct either, because it was squeezed into the same number of bits.
That said, an ideal video encoder given the higher-resolution source would always do at least as well at the same bitrate, because it has strictly more options[1], but it's not clear to me that actually-existing encoders meet this ideal.
[1] If the optimal way to encode a video is to downscale it to 360p, an optimal 1080p encoder can downscale to 360p. If the optimal way to encode the video is to use information that's not visible at 360p, the 1080p encoder can use it, but a 360p encoder can't.
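A toy way to see that superset argument: anything the 360p-only encoder can do, the 1080p encoder can do too (by downscaling first), so its best case can never be worse. The distortion numbers below are made up purely for illustration.

```python
# Toy version of the footnote's argument: the higher-resolution encoder's set
# of strategies is a superset, so its best result can't be worse. The
# distortion numbers are invented purely for illustration.
def best_distortion(strategies):
    return min(strategies.values())

strategies_from_360p = {
    "encode the 360p downscale": 4.0,
}

strategies_from_1080p = {
    **strategies_from_360p,                          # can imitate the 360p-only encoder
    "spend bits on detail invisible at 360p": 2.5,   # plus options the 360p encoder lacks
}

assert best_distortion(strategies_from_1080p) <= best_distortion(strategies_from_360p)
```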
Yeah, doing an incremental rollout doesn't save you if you're not monitoring it.
Even though they'd take the same action, it still seems like Alice and Bob disagree more than Bob and Claire. I'd argue that Bob and Claire probably have more similar world models and are more likely to agree on other actions than Alice and Bob are.
I guess it depends on what you're trying to achieve with an argument. If Alice and Bob have to agree on a decision for a single hand, then it's convenient that they can agree on the action, but I suspect that if they had to team up long-term, Bob and Claire would find it easier to work together than Alice and Bob would, and Alice and Bob are more likely to have major disagreements whose resolution would improve their play.
I agree about grandma getting scammed, but I think you're wrong about the banks. Credit card refunds are already trivial to get, and the customer almost always wins (even when their bank thinks they're committing refund fraud). The problem is that the banks know these charges have a high chance of being fraudulent, so they charge high transaction fees to cover the expected losses.
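To spell out the pricing logic, it's basically an expected-value calculation; every number in this sketch is made up for illustration, not taken from any real bank:

```python
# Back-of-the-envelope version of "fees cover expected losses".
# Every number here is invented for illustration.
fraud_rate = 0.02            # assumed fraction of transactions lost to fraud/disputes
avg_transaction = 50.00      # assumed average transaction size, in dollars
chargeback_overhead = 15.00  # assumed processing cost per lost dispute

expected_loss = fraud_rate * (avg_transaction + chargeback_overhead)
fee_floor_pct = expected_loss / avg_transaction * 100

print(f"Expected fraud cost per transaction: ${expected_loss:.2f}")
print(f"Fee needed just to break even on that: {fee_floor_pct:.1f}% per transaction")
```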
Since you're showing the model its previous answer (but not the CoT that led to it), is it possible the high success rate when obfuscating is because the model already knows the right answer?
For example, in your "With failure" retry image, the AI can read its own previous answer: "Based on my analysis, alpha is recommended...".
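One cheap way to test that would be to rerun the retry condition with the previous answer redacted and check whether the obfuscated success rate drops. A rough sketch, where `call_model`, `is_correct`, and the prompt layout are hypothetical stand-ins rather than your actual harness:

```python
# Hypothetical ablation sketch: does hiding the previous answer from the retry
# prompt change the success rate? `call_model` and `is_correct` are placeholders.

def build_retry_prompt(task, previous_answer, show_previous_answer):
    parts = [task]
    if show_previous_answer:
        parts.append(f"Your previous answer was: {previous_answer}")
    parts.append("Please try again.")
    return "\n\n".join(parts)

def success_rate(tasks, show_previous_answer, call_model, is_correct):
    successes = 0
    for task, previous_answer in tasks:
        prompt = build_retry_prompt(task, previous_answer, show_previous_answer)
        successes += is_correct(task, call_model(prompt))
    return successes / len(tasks)

# Compare, e.g.:
#   success_rate(tasks, show_previous_answer=True,  call_model=call_model, is_correct=is_correct)
#   success_rate(tasks, show_previous_answer=False, call_model=call_model, is_correct=is_correct)
```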