
Do Doctors Need a DOS of Their Own Medicine?

Sam Steele

29 November 2024

When lives are at stake, you might expect medical professionals to embrace any tool that could help them make better diagnoses. Yet a surprising new study reveals that doctors' resistance to artificial intelligence may be putting patients at risk.


A doctor in a white coat staring at a machine generated human hologram

Code Blue Screen


A medical study published last week found that doctors co-diagnosing with AI were more likely to get the diagnosis wrong because they disregarded the machine when it questioned their expertise.


This resistance to computer input has itself been given the diagnosis of “algorithmic aversion” – a cognitive risk factor most likely to be associated with people who are highly experienced or expert in their field.


The report, Large Language Model Influence on Diagnostic Reasoning (Goh et al., 2024), observed that when diagnosing medical conditions, GPT-4 alone significantly outperformed doctors who were using GPT-4 as an aid.


This finding goes against the prevailing body of research, which holds that human + AI is the most effective combination, rather than AI alone or human alone.


For example, a 2023 Harvard report established that workers using ChatGPT outperformed those who did not by 40% (Dell’Acqua et al., 2023). Similarly, a paper from MIT uncovered a 37% increase in productivity for workers using AI (Noy and Zhang, 2023), and GitHub CEO Thomas Dohmke says that coders using AI in their work are 55% faster (Scheffler, 2023).


"When the AI model said, ‘Hey man! You might be wrong, these things don’t fit,’ they disregarded that"

Dr. Adam Rodman


A middle aged man in glasses, unshaved and wearing a check shirt

Calling Dr Beep


Dr. Adam Rodman, one of the authors of the medical report, explained in a podcast interview (Hard Fork, 00:31:00) that the doctors did not want to accept that the AI could be better than them at diagnosis; they would second-guess what the AI suggested and end up getting the diagnosis wrong.


Dr. Rodman explained that two factors were at work in the outcome of their study. One was a lack of experience with the AI technology, particularly with using detailed prompts to get good responses. The other was resistance by some doctors to having their medical opinion questioned by an AI chatbot.


“Some people didn’t quite know how to use a language model to get the most use out of it, so training is one factor,” he said. “Number two, though, when you look at the data, people liked it when the AI model said, ‘Oh, this is your idea; these are the things I agree with.’ But when the AI model said, ‘Hey man! You might be wrong, these things don’t fit,’ they disregarded that.”



"the irony is medical breakthroughs have always required openness to unconventional ideas"

A group of doctors standing, arms crossed

A hard pill to swallow




This type of resistance to AI input has been described as ‘algorithmic aversion’ by Ethan Mollick, Professor of Management at Wharton, who specialises in researching how AI impacts entrepreneurship and innovation.


Mollick also suggests that the majority of us don’t really understand what AI does: we mostly use it like a search engine and get frustrated by the results.


He notes that for anyone not used to working with a chatbot, “AI systems are surprisingly hard to get a handle on, resulting in a failure to benefit from their advice” (Mollick, 2024).

The irony is that medical breakthroughs have always required creative thinking and openness to unconventional ideas. When Ignaz Semmelweis first proposed that doctors should disinfect their hands with a chlorine solution between patients, he was ridiculed by his colleagues.


Semmelweis' idea was so radically different from the prevailing customs and practices of the day that it was too challenging for his fellow physicians to accept. So antithetical were his basic hygiene ideas to the medical establishment that he even lost his job for suggesting that doctors were spreading infection by not properly disinfecting their hands (Pittet and Allegranzi, 2018).

"The future of medicine lies not in choosing between human expertise and artificial intelligence, but in learning to leverage both."

Algorithmic aversion appears to be a similar type of establishment thinking, symptomatic of a closed system - the antithesis of the creative mindset required to fully exploit the opportunities AI presents. 


The hard pill to swallow may be that it is the doctors themselves who require a system update if they are going to fully realize the potential of this powerful new technology.


The future of medicine likely lies not in choosing between human expertise and artificial intelligence, but in learning to leverage both.

an apple made up of computer code

An AI a day keeps the errors away!


Artificially intelligent chatbots are a very different tool from the search engines we're used to. Chatbots are more like an incredibly forgetful but endlessly knowledgeable intern. You have to explain very carefully what you want, and understand that the chatbot doesn't necessarily remember any previous conversations and has absolutely no context. 
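That forgetfulness is literal at a technical level: each request to a chat model is stateless, so the model only "remembers" what the caller resends with every turn. A minimal sketch of this pattern, where `send_to_model` is a hypothetical stand-in for a real chat-API call:

```python
# Each request to a chat model is stateless: the model sees only the
# messages included in that request. To give it "memory", the caller
# must accumulate the conversation and resend the whole thing each turn.

def send_to_model(messages):
    """Hypothetical stand-in for a real chat-API call. Here it just
    reports how much context the model actually received."""
    return f"(model saw {len(messages)} messages)"

conversation = []  # the caller, not the model, owns the history

def ask(question):
    conversation.append({"role": "user", "content": question})
    reply = send_to_model(conversation)  # full history resent every time
    conversation.append({"role": "assistant", "content": reply})
    return reply

first = ask("Patient presents with fever and a stiff neck.")
second = ask("What tests would you order?")  # model still sees turn one
```

If the caller ever sends only the latest question, the model genuinely has no idea what was said before, which is why careful, self-contained prompts matter so much.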


"Treat AI like an infinitely patient new coworker" suggests Ethan Mollick, "a coworker who forgets everything you tell them each new conversation, one that comes highly recommended, but whose actual abilities are not that clear" (Mollick, 2024).


In order to learn how to work efficiently with these amnesiac colleagues, Mollick advises spending around 10 hours simply talking with an AI chatbot to get used to how it works.


So maybe what the doctors need is not so much an apple a day as a regular dose of AI chatbot exposure, to help them get over their aversion to this potentially life-saving tech.


  • Bibliography and Links

    Dell’Acqua, F. et al. (2023) Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Working Paper 24–013. Harvard Business School Technology & Operations Mgt. Unit, p. 20. Available at: https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7282.pdf.


    Goh, E. et al. (2024) ‘Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial’, JAMA Network Open, 7(10), p. e2440969. Available at: https://doi.org/10.1001/jamanetworkopen.2024.40969.


    Roose, K. and Newton, C. (no date) ‘Hard Fork’. Available at: https://open.spotify.com/show/44fllCS2FTFr2x2kjP9xeT (Accessed: 25 November 2024).


    Mollick, E. (2024) Getting started with AI: Good enough prompting, One Useful Thing. Available at: https://www.oneusefulthing.org/p/getting-started-with-ai-good-enough (Accessed: 29 November 2024).


    Noy, S. and Zhang, W. (2023) Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence. MIT. Available at: https://economics.mit.edu/sites/default/files/inline-files/Noy_Zhang_1.pdf.


    Scheffler, I. (2023) ‘GitHub CEO says Copilot will write 80% of code “sooner than later”’, Freethink, 17 June. Available at: https://www.freethink.com/robots-ai/github-copilot (Accessed: 29 November 2024).



We can help your business to save time and be more innovative


Contact us to find out more

