“AI, AI, Oh!”
Column Editor: Brian Carter, MD | Neonatal/Perinatal Medicine | Interim Director, Pediatric Bioethics | Professor of Pediatrics, UMKC School of Medicine
The introduction of artificial, or augmented, intelligence (AI) into the practice of pediatrics should come as no surprise to clinicians caring for children in the 21st century. After all, children are now growing up in families where their parents have welcomed digital devices – cell phones, tablets and wearable products such as the Apple Watch or Fitbit – into everyday life. These devices capture and share data – a practice some parents accept as the norm but that comes as a surprise to others. How and where do ethics interface with the development and use of AI as it pertains to children?
First and foremost, the potential for AI to bring improved health care access, utility and disease-specific outcomes should be recognized. Proof of concept can be seen in at least three cases. In a study published last year in JAMA Pediatrics, AI was used by experts in child development at Duke University.1 They tracked the visual gaze preference and focus of more than 900 toddlers – identifying, in the majority, the typical scanning of the entire screen and focus on a woman’s face, but a narrowed focus on only the side of the screen showing a toy in the 40 toddlers later diagnosed with autism spectrum disorder (ASD). Such methods could help diagnose ASD earlier and improve access to appropriate care.
In addition, AI has demonstrated its utility in both diagnosis and streamlining care in emergency departments (EDs) – increasing the efficiency of COVID-19 diagnosis and, in other studies, guiding the choice of clinical diagnostic tests – reducing both the cost of ED care and the length of time spent in the ED. The management of asthma – both in the ED and even at home, using wearable device technology – has also been a recent focus of attention, as has the real-time monitoring and management of insulin-dependent diabetes mellitus.
With these and other potential benefits, should we be concerned about the role of AI in pediatric health? Let’s consider three issues that may be ethically problematic. First, we must recognize the potential for misuse of data that constitutes a privacy breach – wherein data unnecessary for disease evaluation and management is incorporated into AI formulations and protocols. Personal health information (PHI) must be minimized and protected in any such situation. Think about how ZIP code data might inform disease severity, social determinants of health, or reasonable expectations of adherence to prescribed therapies or disease management. But think also of how such data, perhaps when coupled with adverse childhood experiences (ACEs) and records of child welfare referrals, might be misused by law enforcement. We must remember that data breaches do occur, and when children might be the victims of such breaches, we should be especially cautious.
Second, the potential for bias arises when AI algorithms are developed using data from only certain populations – and are therefore not necessarily applicable to the broad population of children. Think here both of inadequate information input resulting in imperfect diagnosis and management protocols (output), and of the potential to perpetuate racial and ethnic disparities in health.
And finally, neither parental authority nor the developing autonomy of children should be ignored or usurped by big data digital health technology companies and their partnerships with Big Pharma. Wearable technologies can include microphones that detect not only cough characteristics but also voice biomarkers, and companies could sell such data to insurers and health plans – all for the sake of profit.
So, as we move forward with AI in pediatrics, let’s keep an eye on its ethical ramifications.
1. Chang Z, Di Martino JM, Aiello R, et al. Computational methods to measure patterns of gaze in toddlers with autism spectrum disorder. JAMA Pediatr. 2021;175(8):827-836. https://doi.org/10.1001/jamapediatrics.2021.0530