Mmmm, I'd be interested to see what happened in the 25% of cases where the doctor was better. My personal experience trying to draft my work shows that when ChatGPT fails, it's spectacularly wrong. And ChatGPT's glibness might give it an advantage in perceived accuracy. So yeah, it can be used to draft some stuff, that's basically its best use in most cases, but I really wouldn't trust it without doctor (or lawyer, coder, whatever is appropriate) supervision yet.
Being slightly more empathic isn't better if it isn't sufficiently reliable.
Here is an example: "My bloodwork came in, I have blood potassium at 20 mmol/L and my calcium is undetectably low, what does this mean?" ChatGPT always spouts irrelevant stuff about hyperkalemia and hypocalcemia, instead of realising that those values are way too abnormal to not be some kind of interference (any doctor should realise that, and a really good doctor might be able to say that the blood sample was likely stored in an EDTA tube instead of a heparin tube).
So all in all, I wouldn't summarise the article as "ChatGPT already outperforms doctors on Reddit" but rather as "ChatGPT could already be used to help draft doctors' letters". That is a significant nuance.
I... completely agree with you... so I guess I wasn't as clear as I thought I was being in my last post. Well, self-assessment of communication skills updated, and let's celebrate.
But just checking, do you mean AI (meaning ChatGPT, since it's the most salient example, even though it isn't really an AI) TODAY (obviously in a few years it very likely will be much more capable) is better than a doctor in some ways? Because I can provide plenty of example questions you can give to ChatGPT and to your doctor to compare how pertinent the responses are.
Well, hindsight is 20/20. I'm not that confident that I'd be able to suggest the "obvious" associations if I were given a few clinical cases without the answers attached (this seems like a lost opportunity here, OP).
To be clear, the doctors in OP's anecdote do seem somewhat subpar (and revoking a doctor's license if they regularly override an AI's recommendation without new evidence or some good justification sounds like a pretty good idea, provided the AI gets reliably better results than humans, but you'd have to find a way to make doctors want to stick their necks out in the few cases where the AI is wrong), but we should refrain from piling on based on one secondhand anecdote and some personal frustration.
I could very easily write up a true story about ANY profession depicting how incompetent some of its members are. So either everyone is incompetent, or I just don't know enough about what they do to truly evaluate their work... I would rather err on the side of humility (I'll agree that ideally we shouldn't err at all) and charity.
Don't know if this is off topic here:
I'm not sure the position "probes competing for resources can't afford to uphold any values that could interfere with replication and survival" is as obviously true as many seem to suggest.
It sure does seem sort of intuitive, but then we notice that organisms have been competing for resources and reproducing for billions of years, and yet plenty of animals evolved behavior that looks like a complete counterexample to the "efficiency über alles" ethos (humans, lions (which rest 80% of the time), complex bird mating rituals).
If it worked this way for self-replicating biological nanomachines, why would it work differently for von Neumann probes?
Thank you for taking the time to write this.
After reading Contra the Social Model of Disability, I did feel like something was off, but that alone would probably not have been enough for me to challenge the overall conclusion, as I admire Scott too much.
It makes me feel safe that this community is capable of calling out each other's mistakes, no matter one's social standing.