Oh… it’s a tech term, and a tech Olympics to vault across, not my own. Curious how you would describe the feeling when you are in the uncanny valley. My guess is something like creepy, but that’s an assumption.
Umm, I think it's a good reminder to be cautious with ChatGPT on medical questions. I've found the AI here pretty reliable as regards PTSD questions, but a 75% error rate on medications is probably a good reminder to take everything it says with a big, big pinch of salt.
And, honestly, with everything else. I am appalled at the errors I see in news and books. Appalled, not because there are errors, but because we have allowed it to become so much a part of what we do.
Unfortunately, ChatGPT only knows whatever data is entered into it. When I complete survey questionnaires, I often notice that there is no possible way to provide an accurate answer. For example, I recently answered a survey question about my last meal: was it high fat, high protein, or high carbohydrate? None of those described it, yet the survey provided no other choice. So, I guess I'll have to lie. Data entries just don't cover the gray areas.
Study: ChatGPT wrong on 75% of medicine usage questions