It seems LLMs are less likely to hallucinate answers if you end each question with 'If you don't know, say "I don't know"'.
They still hallucinate a bit, but less. Given how easy it is, I'm surprised OpenAI and Microsoft don't already do that.
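For anyone who wants to try it, here's a minimal sketch of bolting the instruction onto every question programmatically. It assumes the OpenAI Python client with an API key in the environment; the model name is just an example, swap in whichever one you use.

```python
# Minimal sketch: append a hedging instruction to every question.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable. Model name is illustrative.
from openai import OpenAI

HEDGE = ' If you don\'t know, say "I don\'t know".'

def ask(question: str, model: str = "gpt-4o-mini") -> str:
    """Send a question with the anti-hallucination suffix appended."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question + HEDGE}],
    )
    return response.choices[0].message.content

# e.g. ask("Who won the 1897 Tour de France?") is more likely to come
# back as "I don't know" than as an invented name (the first Tour was
# only run in 1903).
```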
Has its own failure modes. What does it even mean for a model not to know something? "I don't know" is just yet another category of possible answers.
Still a nice prompt. Also works on humans.
Fun fact I just discovered - Asian elephants are actually more closely related to woolly mammoths than they are to African elephants!
Reserve soldiers in Israel are paid their full salaries by National Insurance. If they are also able to work (which is common, as the IDF isn't great at efficiently using its manpower), they can legally do so and will be paid by their company on top of whatever they receive from National Insurance.
Given how often sensible policies aren't implemented because of their optics, it's worth appreciating the cases where that doesn't happen. The biggest impact of a war on Israel is economic, and anything that encourages people to work rather than waste time during a war is good policy. But it could so easily have been rejected because it implies soldiers are slacking off from their reserve duties.