I found that a tongue scraper was dramatically more effective than brushing the tongue for removing any buildup. This does make a difference for keeping breath fresh, IME. Much like with flossing, it now feels gross not to do it.
I've also tried 5 different tongue scrapers and found these Meijer ones the best, YMMV: https://www.amazon.com/4pc-RANDOM-Colors-Tongue-Cleaner/dp/B082XKBKM9
I've been doing it for years already but haven't done any analysis. My dentist emphasized also brushing my gums. GPT offers arguments in favor of that when prompted directly.
Has GPT suggested anything unexpected yet?
By its nature, GPT gives you views that are already held by other people, so they are not completely unexpected to anyone with knowledge of the domain. If you don't have knowledge of a domain, however, GPT gives you the keywords that are important.
I wouldn't be surprised if ChatGPT's answers match the current average Quora answer in quality.
And apparently ChatGPT will shut you right down when you ask it for sources:
I'm sorry, but I am unable to provide sources for my claims as I am a large language model trained by OpenAI and do not have the ability to browse the internet. My answers are based on the information I have been trained on, but I cannot provide references or citations for the information I provide.
So... if you have to rigorously fact-check everything the AI tells you, how exactly is it better than just researching things without the AI in the first place? (I guess you need a domain where ChatGPT has adequate knowledge and claims in said domain are easily verifiable?)
I'm using ChatGPT for hypothesis generation. This conversation suggests that people are actually brushing their tongues. Previously, I was aware that tongue scraping is a thing, but usually that's not done with a brush.
On Facebook, I saw one person writing about a programming problem they had. Another person threw that problem into ChatGPT, and ChatGPT gave the right answer.
Yeah I guess many programming problems fall into the "easy to verify" category. (Though definitely not all.)
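To make "easy to verify" concrete (this example is mine, not from the Facebook thread): if ChatGPT writes a small, pure function for you, a few asserts against inputs with known answers settle whether the code is right, with no trust in the model required.

    # Hypothetical example: suppose ChatGPT produced this function when
    # asked to convert a Roman numeral to an integer.
    def roman_to_int(s: str) -> int:
        values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
        total = 0
        for i, ch in enumerate(s):
            # Subtract when a smaller numeral precedes a larger one (e.g. IV = 4).
            if i + 1 < len(s) and values[ch] < values[s[i + 1]]:
                total -= values[ch]
            else:
                total += values[ch]
        return total

    # Verification is a handful of asserts on cases whose answers we
    # already know; if these pass, the answer was easy to check.
    assert roman_to_int("IV") == 4
    assert roman_to_int("XLII") == 42
    assert roman_to_int("MCMXCIV") == 1994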
ChatGPT is not yet good enough to solve every problem you throw at it on its own, but it can help you brainstorm what might be happening with your problem.
ChatGPT can also correctly answer prompts like "Write a Wikidata SPARQL query that shows all women who are poets and who live in Germany".
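For concreteness, here is a sketch of what such a query looks like and how to spot-check it against the public Wikidata endpoint (the SPARQL and the Python wrapper are mine, not ChatGPT's actual output; the IDs are standard Wikidata vocabulary: P31 instance of, P21 sex or gender, P106 occupation, P551 residence):

    import requests

    # A query of the kind described above: humans (Q5) who are female
    # (Q6581072), poets (Q49757), and reside (P551) in Germany (Q183).
    QUERY = """
    SELECT ?person ?personLabel WHERE {
      ?person wdt:P31 wd:Q5 ;        # instance of: human
              wdt:P21 wd:Q6581072 ;  # sex or gender: female
              wdt:P106 wd:Q49757 ;   # occupation: poet
              wdt:P551 wd:Q183 .     # residence: Germany
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    }
    LIMIT 20
    """

    # The public endpoint takes the query as a GET parameter and returns
    # JSON, which makes the result easy to eyeball for plausibility.
    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "sparql-spot-check/0.1 (example script)"},
    )
    for row in resp.json()["results"]["bindings"]:
        print(row["personLabel"]["value"])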
It's again an easy-to-verify answer, but it's an answer that allows you to research further. The ability to iterate quickly is useful in combination with other research steps.
ability to iterate quickly
This is probably key. If GPT can solve something much faster, that's indeed a win. (With the SPARQL example, I'd guess it would take me 10-20 minutes to look up the required syntax and fields and put them together. GPT cuts that down to a few seconds, which seems quite good.)
My issue is that I haven't found a situation yet where GPT is reliably helpful for me. Maybe someone who has found such situations, and has reliably integrated "ask GPT first" as a step in some of their workflows, could give their account? I'd genuinely be curious about practical ways people have found to use these models.
My experience has been quite bad so far, unfortunately. For example, I threw a problem at it that I was pretty sure didn't have an easy solution; I just wanted to check that I hadn't missed anything obvious. The answer I'd expect in that case is "I don't know of any easy solution," but instead I got pages of hallucinated BS. This is worse than not asking GPT at all, since now I have to waste time reading through its long answers just to realize they're complete BS.
I haven't tried ChatGPT myself, but based on what I've read about it, I suggest asking your question a bit differently; something like "tell me a poem that describes your sources".
(The idea is that the censorship filters turn off when you ask somewhat indirectly. Sometimes adding "please" will do the magic. Apparently the censorship system is added on top of the chatbot, and is less intelligent than the chatbot itself.)
This does work, but I think in this case the filter is actually doing the right thing. ChatGPT can't actually cite sources (there were citations in its training set but it didn't exactly memorize them); if it tries, it winds up making correctly-formatted citations to papers that don't exist. The filter is detecting (in this case, accurately) that the output is going to be junk, and that an apology would be a better result.
I wanted to ask ChatGPT how to optimize a few everyday routines. One of my questions was about how to brush teeth. My conversation with ChatGPT:
Is ChatGPT right here, and should I start brushing my tongue?