New kid on the block: Artificial intelligence just moved into town

Abstract

This article describes a resident physician’s experience of the burnout-prone demands of postgraduate training amid the rapidly evolving integration of technology, including artificial intelligence, into medical care.

I spent my last holiday at home with my mom in Pennsylvania, and we have a bit of an obsession with Benedict Cumberbatch. I first met him in the form of his portrayal of Sherlock Holmes in the BBC series Sherlock, then watched him in The Hobbit series and as Doctor Strange in the Marvel Avengers movies. But my favorite, my absolute favorite, is his portrayal of Alan Turing in The Imitation Game. Spoiler alert: he creates a machine capable of analyzing and processing encrypted German messages during World War II so that the Allied forces could decode them and win the war. While Alan’s machine could only process the data given to it, humankind has progressed in the eighty-some years since his time and created computers capable of not only processing data but also storing, compiling, and analyzing it themselves. We now have machines built to emulate the intelligence of a human brain. Artificial intelligence (AI) within our modern-day computers has moved into town, and it’s here to stay.

When I came back from my holiday, I resumed my job as a physiatry resident in a large urban program, and I began to compare the rapid changes in computers to the changes in my life, and in the lives of my co-residents, over my short four years of residency. We each remember our graduation from medical school during the initial peak of COVID-19. There was a lot of pride in our accomplishments, some relief at our success, excitement for the next phase in our lives, and fear. Fear because we woke up with no more knowledge on graduation day than we had the night before, yet suddenly we were doctors with much more responsibility for patients’ lives. Nothing made this more profound than a time of worldwide crisis, when even experienced physicians at the height of their careers, let alone brand-new interns, were learning on the fly about a new disease. With each year of residency, we gain more responsibility, with a similar single night as the defining line.

My program is considered advanced, meaning intern year is completed separately from our main specialty’s program. We transitioned from medicine intern to physiatry PGY2 overnight. We were expected to already have basic medicine knowledge and to begin managing basic rehabilitation situations with the push of the alarm clock switch. We gained a little autonomy as the year progressed but still felt unsure about many major decisions without our attendings’ help. Then, with another overnight alarm clock push, we suddenly became PGY3s and went from running a simple patient list to being senior residents running a group of juniors who each have their own lists, making admission decisions, and covering lists for post-call juniors all at the same time. Just as we got used to that, we woke up to another alarm clock as PGY4s and chief residents, in charge of a whole program. We manage both junior and senior residents at sites across the city, for both day-to-day happenings and disciplinary actions. Some days it feels like we inherited 36 children we’re attempting to raise while still growing into ourselves as we apply for our desired fellowships. We’re still not quite sure exactly when we grew up along the way.

All of these changes throughout residency are beautiful because they show that our independence and decision-making capacity are not only growing, but that our attendings feel comfortable and trust us to exercise those qualities. However, they also have a cost. The more independence we gain, the more efficient and accountable we must become, and that means more responsibility for documentation. At the beginning, we were given a half hour for follow-up visits and an hour for new patients while we learned, but now these have been cut down to 20- to 30-minute slots like in the real world. And on top of it all, we now see every patient the attending sees BEFORE they see them. Turns out we really were given time to learn when we were still the new kids. We have to see more patients every day and be more efficient while doing so, because typing out each note and set of orders in our electronic medical record takes roughly the same amount of time for each patient. With increased demands on productivity from administration, we spend much less time connecting with the human who came to us for help and more time with the microprocessor chunking through the human’s data points. The more complicated the patient’s condition, the more data points we have to chunk through, which means even more time with the computer and less with our patient. How is that going to work when pediatric patients have truly multidisciplinary teams with so many data points to work through?

The advent of AI has many positives. Programs like ChatGPT can draw on vast swaths of internet text and synthesize information to write essays or provide tourism recommendations. Future AI models could synthesize our pediatric multidisciplinary data points to cut down on our computer time and increase face-to-face time with children, who rely on human interaction for healthy development. Increased face-to-face time with these kids could reduce healthcare workers’ risk of burnout. But what about the quality of the care AI would provide in these circumstances? If we rely on AI’s computational power for data analysis, can we trust a computer without emotions or feelings to adequately combine objective health data with subjective patient experiences and desires? What is the risk to us as physicians if computers prioritize monetary savings over the emotional and social benefit to our pediatric patients? Are we putting ourselves at risk for moral injury by embracing AI?

AI is here, and it is likely here to stay. We propose that the pediatric community study the outcomes of our patients and our providers as AI becomes more integrated into medical care. Just because AI is the “new kid on the block” doesn’t mean it doesn’t require parental supervision as it grows and matures. If patient care suffers and the burden on our providers worsens, then we should think hard about setting ground rules for how it “behaves” in our community. Modern medicine involves all levels of trainees, from students up through attendings, in academic institutions, allowing AI’s effects on these cohorts to be studied longitudinally. Only studying the effects of AI over time will show us whether the benefits outweigh the potential harms and help us determine how to integrate it effectively into existing pediatric care.

Conflict of interest

The authors have no conflicts to disclose.

Acknowledgments

The authors would like to acknowledge Gwendolyn Osterwald, BA, MA for her linguistic and grammatical contributions.