Does AI have a place in medical school?
It seems unlikely at this point that you have never heard of AI. Somehow, it seems even less likely that you have never used AI (given the number of products we use now that are AI-powered). But I think it is possible that a reasonable person (such as yourself, dear reader) may have never had AI adequately defined.
For our purposes, AI can be defined as “computers and technology to simulate intelligent behavior and critical thinking comparable to a human being.”[1] In short, AI seeks to make computers capable of tasks that currently require human reasoning. Wouldn’t it be nice if you had a personal assistant that could do any task that you could? I think so.
As you can tell, AI has the power to change everything about how we live and work. Tasks that used to take lots of time and effort will be greatly simplified. Machines will be able to help us complete more research and get more work done in the same amount of time. Things that are presently impossible will be made possible because of AI.
Why are people worried about AI?
Now that we have had some good feels, it’s time to get a bit more real about AI. First, a quick history lesson: have you ever heard of the Luddites?
The Luddites were a group of textile workers in England during the Industrial Revolution. As factories began automating certain aspects of production, and steam power began replacing human power, the ability to mass-produce goods sharply increased. This drove down prices and made it easier for people to afford more of what they needed. Sounds good, right?
The Luddites disagreed. They realized that they were far more expensive to employ and less efficient at weaving than a machine. They knew they would all lose their jobs. This, of course, was upsetting to the Luddites, who did their best to prevent any kind of technological advancement, often violently. In the end, they lost their jobs, but people across England got cheaper clothing. Ever since, anyone who resists technological change has been called a Luddite.
You can probably see where I am going with this. Now that machines are capable (or soon will be) of doing many of the intellectual tasks once reserved solely for humans, will there be anything left for us? Will anyone have a job? [2]
To get a little more esoteric, what if AI becomes too powerful and deems that humans are unnecessary? Could this be the end of humanity? [3]
While I think these are important questions that deserve a much deeper dive than I can provide here, I am certainly not a Luddite myself. There have been fears about technology’s impact on humanity for generations, often well-founded, but I believe we have come out the other end better for it. AI certainly needs to be deeply respected for the dangers it could pose, [4] but I don’t believe it should be feared (at least in its present form).
While we don’t know if AI will lead to some of these bad outcomes, there is one thing we know for sure. AI is here and it’s not going away any time soon. Your competition (for med school, residency, fellowship, jobs, etc.) will be using it, often quite effectively. It has the potential to make you much better at taking care of your patients. So now is the time to start.
How not to use AI
While there is a lot of upside to AI, there are many ways to use it that confer little benefit and may even be outright harmful. Here are a few that come to mind:
Using it to surreptitiously complete schoolwork. I suppose this goes without saying but as a former teacher, I feel I have to voice it. Clearly you are not learning anything if you use AI to write your paper for you and submit it without disclosing its use.
Using AI for research without checking primary sources. While it is helpful for AI to help generate ideas for research or sketch out some of the basics of certain topics, it is not yet reliable enough to be trusted at face value. This is especially true in healthcare, where the consequences of implementing incorrect information can be dire.
Using AI in research without disclosing it. While there may be some efficiencies to using AI in research projects, this always needs to be disclosed. It clearly is not as capable as a human at producing reliably accurate information.
Using AI to more efficiently deforest the Amazon or strip mine natural landscapes. Just… don’t.
How to use AI
This is the fun part. There are many positive ways AI can impact our lives. I think a good general principle is that it is excellent at replacing repetitive, excessively laborious, boring, or unproductive tasks. Here are a few:
Finding flashcards relevant to study material. Of course, I am biased, but there is nothing productive about searching through a deck to find which flashcards match your lecture. Saving time on this strikes me as purely a good thing.
Writing or rewriting code to help data processing in research. It is hard for me to express how much time I have saved having AI write little Python scripts to help me better sort through data on research projects. While you need to know how to do it yourself, it moves much more quickly this way. It is often quite accurate too.
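To give a concrete sense of what I mean, here is a sketch of the kind of small Python script AI can draft in seconds: grouping study measurements by condition and averaging them. The column names and data are made up for the example; the point is that this is routine glue code you should still be able to read and verify yourself.

```python
import csv
import io
import statistics
from collections import defaultdict

# Hypothetical CSV export from a research spreadsheet (made-up data).
raw = """subject,group,score
s1,control,71
s2,treatment,84
s3,control,69
s4,treatment,90
"""

def mean_score_by_group(csv_text: str) -> dict:
    """Group rows by the 'group' column and average the 'score' column."""
    scores = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        scores[row["group"]].append(float(row["score"]))
    return {group: statistics.mean(vals) for group, vals in scores.items()}

print(mean_score_by_group(raw))  # {'control': 70.0, 'treatment': 87.0}
```

Nothing here is hard, but multiply it across dozens of spreadsheets and the time savings add up; the key is that you can check every line of a script this short.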
Summarizing long documents. Want to be sure you got the gist of a research paper? Read it yourself and then have AI summarize the key points. This is similar to discussing it with a colleague, which I always find helpful.
Using AI to more rapidly discover lifesaving medical treatments. Let’s make this happen.
Closing thoughts
Right now, we have an opportunity to take this technology and make it work for us as healthcare providers, and not against us. I think we are often a little scared of new technology, and that we often feel it is a little outside of our wheelhouse. But I believe that if we take the time to embrace this new development, we and our patients will benefit for generations.
[1] Amisha, Malik, P., Pathania, M., & Rathaur, V. (2019). Overview of artificial intelligence in medicine. Journal of Family Medicine and Primary Care, 8, 2328–2331. URL
[2] Moradi, P., & Levy, K. (2020). The Future of Work in the Age of AI. URL
[3] Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk? arXiv:2206.13353. URL
[4] Hendrycks, D., Mazeika, M., & Woodside, T. (2023). An Overview of Catastrophic AI Risks. arXiv:2306.12001. URL