The U.S. Food and Drug Administration’s (FDA) ambitious rollout of its generative AI tool, “Elsa,” has encountered significant operational challenges, raising concerns among staff and experts about the agency’s rapid integration of artificial intelligence into its regulatory processes.
Background and Intended Benefits
Launched on June 2, 2025, nearly a month ahead of its scheduled deployment, Elsa was designed to enhance the efficiency of FDA operations by assisting employees with tasks such as summarizing adverse event reports, reviewing clinical protocols, generating database code, and identifying high-priority inspection targets. Developed within Amazon Web Services’ secure GovCloud environment, Elsa was built to ensure the confidentiality of sensitive government data, explicitly avoiding the use of proprietary information submitted by regulated industries.
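The agency has not disclosed Elsa's internal architecture beyond these details, but the description is consistent with a common pattern for keeping sensitive text inside a government cloud boundary: invoking a foundation model hosted within the GovCloud region itself, so prompts and responses never leave it. A minimal sketch, assuming a Bedrock-style endpoint; the region, model ID, and prompt below are illustrative assumptions, not confirmed details of Elsa:

```python
import json
import boto3

# Illustrative sketch only: Elsa's actual design is not public. This shows
# one conventional approach, calling a foundation model through Amazon
# Bedrock from inside an AWS GovCloud region, so the adverse event text
# stays within the government cloud boundary end to end.
client = boto3.client("bedrock-runtime", region_name="us-gov-west-1")

report_text = "..."  # an adverse event narrative, already inside the boundary

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
    contentType="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{
            "role": "user",
            "content": f"Summarize this adverse event report:\n\n{report_text}",
        }],
    }),
)

summary = json.loads(response["body"].read())["content"][0]["text"]
print(summary)
```

The point of such a design is that the model, the prompt, and the output all reside in the same accredited environment, which is how an agency can promise confidentiality without forbidding AI use outright.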
FDA Commissioner Dr. Marty Makary lauded the early launch, stating that it was achieved under budget due to the collaboration of in-house experts across the agency’s centers. He emphasized Elsa’s potential to transform FDA operations, citing instances where tasks that previously took days could now be completed in minutes.
Operational Challenges and Staff Concerns
Despite the optimistic projections, FDA staff have reported several problems with Elsa. Employees at the Center for Devices and Radiological Health (CDRH) said the tool, initially developed as “CDRH-GPT,” was plagued by bugs, lacked internet connectivity, and struggled with even basic tasks, such as uploading documents or accepting user queries.
Moreover, when tested with questions about FDA-approved products or other public information, Elsa returned summaries that were incorrect or only partially accurate. Staff members said officials had overstated the tool’s capabilities and argued that, in its current form, it should be limited to administrative tasks rather than scientific evaluations.
The rapid deployment of Elsa also raised questions about the adequacy of training and support for FDA employees. Some staff felt that the rollout was rushed and that the agency failed to establish proper guidelines and guardrails for the tool’s use. This sentiment was echoed by experts who cautioned against the hasty integration of AI technologies without thorough validation and oversight.
Agency Response and Future Plans
In response to the reported challenges, FDA Chief AI Officer Jeremy Walsh acknowledged the issues and emphasized the agency’s commitment to refining Elsa. He said that as the FDA learned how employees were using the tool, the development team would add features and evolve with the needs of staff and the agency.
The FDA has framed Elsa as the first step in a broader AI strategy, with plans to layer on additional capabilities, such as advanced data processing and further generative AI functions, to support the agency’s mission. The current setbacks, however, highlight the need for a more measured approach to AI adoption, one that ensures technological advances do not compromise the quality and reliability of regulatory reviews.
Implications for Regulatory Processes
The challenges faced by Elsa underscore the complexities of integrating AI into critical regulatory functions. While AI has the potential to enhance efficiency and reduce workloads, its deployment must be accompanied by rigorous testing, clear guidelines, and ongoing oversight to prevent errors that could impact public health decisions.
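What “rigorous testing” would look like in practice is not spelled out, but one conventional safeguard is a golden-set regression suite: questions with known correct answers that a model must pass before each release. A minimal sketch, in which `ask_model` is a hypothetical wrapper around whatever inference endpoint an agency uses and the cases are placeholders, not real FDA data:

```python
# Hypothetical pre-deployment check: run the model against questions with
# known correct answers and block release if accuracy falls below a bar.
GOLDEN_CASES = [
    {
        "question": "What is the approved indication for Drug X?",  # placeholder
        "must_contain": ["hypertension"],                           # placeholder
    },
]

def ask_model(question: str) -> str:
    raise NotImplementedError  # call the deployed model endpoint here

def run_regression(cases) -> float:
    """Return the fraction of cases whose answer contains every required fact."""
    passed = 0
    for case in cases:
        answer = ask_model(case["question"]).lower()
        if all(fact.lower() in answer for fact in case["must_contain"]):
            passed += 1
    return passed / len(cases)

if __name__ == "__main__":
    accuracy = run_regression(GOLDEN_CASES)
    assert accuracy >= 0.95, f"Model failed regression suite ({accuracy:.0%})"
```

A gate of this kind does not guarantee correctness, but it turns the vague aspiration of “testing” into a concrete, repeatable checkpoint between development and the reviewers who would rely on the tool.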
As the FDA continues to navigate the integration of AI tools like Elsa, it must balance innovation with caution, ensuring that technological advancements serve to augment, rather than undermine, the agency’s commitment to safeguarding public health.