How can generative AI help doctors? Understand the possibilities

Artificial intelligence (AI) has driven major advances in healthcare across several areas. From improving diagnostic accuracy and supporting personalized treatments to automating administrative tasks and empowering patients in decision-making, AI has been rapidly integrated into the industry's workflows.

These advances not only ease the burden on healthcare professionals, but also expand access to innovative technologies, benefiting both specialists and the general public.

The term AI encompasses several technologies, including machine learning (ML), which uses algorithms and statistical models to learn and adapt based on data. Deep learning, a subfield of ML, uses multi-layered neural networks to perform complex tasks such as image and speech recognition.

Additionally, generative AI creates new content through generative models, such as text-production tools. Large language models (LLMs), such as ChatGPT, represent a significant milestone: trained on large volumes of data, they generate novel text similar to that produced by humans.
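
To make the idea concrete, the sketch below shows how a small open language model can be asked to generate text using the Hugging Face "transformers" library in Python. The model ("gpt2") and the prompt are illustrative choices only, not part of any clinical system, and any output would require human review.

    # Minimal sketch: text generation with a small open language model.
    # Model name and prompt are illustrative, not a clinical tool.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "In plain language, anemia is"
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(result[0]["generated_text"])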

Machine learning can be described as a mathematical approach to pattern recognition, akin to regression, whose accuracy improves as more data is processed. Common techniques include decision trees, gradient boosting and neural networks. The latter form the basis of deep learning, which stacks multiple layers of pattern recognition to analyze complex data such as medical images.
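
As an illustration of this kind of pattern recognition, the Python sketch below trains a gradient-boosting classifier with scikit-learn. The data here is randomly generated and merely stands in for tabular measurements; it is not a real clinical model.

    # Minimal sketch: a gradient-boosting classifier learning a pattern
    # from synthetic data (not real patient records).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Synthetic dataset standing in for tabular measurements
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingClassifier(random_state=0)
    model.fit(X_train, y_train)  # the model learns the pattern from data
    pred = model.predict(X_test)
    print(f"held-out accuracy: {accuracy_score(y_test, pred):.2f}")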

A milestone in deep learning came between 2010 and 2012, when convolutional neural networks made detailed image classification practical. This technology has driven advances such as the analysis of chest x-rays to predict cardiovascular risk, extending routine exams to new applications.
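
The sketch below, again purely illustrative, defines a tiny convolutional network in PyTorch to show how stacked convolutional layers turn an image into a classification. Real chest x-ray models are far larger and are trained and validated on curated, labeled datasets.

    # Minimal sketch of a convolutional neural network for image
    # classification. Architecture and sizes are illustrative only.
    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),   # low-level edges
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),  # more abstract shapes
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 56 * 56, n_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    # One grayscale 224x224 "image" as a random tensor, just to check shapes
    model = TinyCNN()
    logits = model(torch.randn(1, 1, 224, 224))
    print(logits.shape)  # torch.Size([1, 2])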

Generative AI in medical practice

Generative AI creates new content, such as images or text, refining its output iteratively in response to specific commands (prompts). Although promising, it has limitations, including the possibility of generating inaccurate or fictitious results, known as “hallucinations”.

In healthcare, generative AI is already proving useful in several areas:

  • Radiology: analysis of exams such as x-rays, ultrasound, computed tomography (CT) and magnetic resonance imaging (MRI) to detect abnormalities.
  • Pathology: identification of patterns in histopathological slides for diagnoses, such as cancer.
  • Early Detection: evaluating clinical data to identify early signs of diseases such as diabetes, cardiovascular disease, respiratory failure, and cancer.
  • Records: automation of medical records, with generation of clinical notes and summaries.
  • Workflow Automation: reduction of administrative tasks, such as scheduling, billing and documentation.
  • Repetitive Tasks: optimization of healthcare professionals’ time, allowing them to focus on direct patient care.
  • Discovery of New Drugs: identification of biological targets (such as proteins or genes) and creation of new molecules.
  • Clinical Research: planning and monitoring clinical studies, data analysis and automation of related administrative tasks.

Other applications include robotic surgery, wearable monitoring, mental health, telemedicine, medical education and fighting epidemics.

In hematology, AI helps analyze digital slides prepared from blood or bone marrow samples, identifying normal cells and abnormalities such as leukemic cells or altered red blood cells. These tools provide clinical decision support by suggesting possible diagnoses or highlighting areas of interest on slides for review by the hematologist. AI-based systems also classify chromosomes in cytogenetics and identify patterns in flow cytometry, contributing to the classification of disease subtypes, such as leukemias and lymphomas, with greater speed and accuracy.

While promising, implementing AI requires rigor to ensure accuracy and reliability. The possibility of “hallucinations” concerns professionals and institutions alike, making human oversight essential, along with regulation and governance of these technologies.

Building patient trust in AI

Patients express concerns about the use of AI, such as diagnostic errors, data privacy and reduced human interaction. To overcome these barriers, it is crucial to reinforce that AI is a supporting tool in the clinical process, not a replacement for it. Practical examples, such as systems that check drug interactions or speed up emergency care, demonstrate its direct benefits.

Ensuring that healthcare professionals remain at the center of clinical decisions is vital to preserving the human relationship in medicine and to reassuring patients. Furthermore, rigorous oversight of how AI models are developed, from data collection to application, is critical to ensuring they are ethical, representative and safe. While AI can revolutionize diagnoses and treatments, it still faces challenges such as privacy, algorithmic bias and lack of transparency.

Conclusion

AI represents a transformative opportunity in healthcare, but its integration must be guided by transparency, ethics and patient focus. As a support tool, it expands the innovative potential of healthcare professionals, preserving the empathy and human care that are essential to medical practice.

*Text written by hematologist Phillip Scheinberg (CRM – 87.226), head of Hematology at BP – A Beneficência Portuguesa in São Paulo and member of the Brazil Health

This content was originally published as “How can generative AI help doctors? Understand the possibilities” on the CNN Brasil website.

Source: CNN Brasil
