CISA Interim Chief’s ChatGPT File Upload Sparks Security Review and Raises AI Governance Concerns

The acting director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA) has come under scrutiny after uploading sensitive government contracting documents to a public version of ChatGPT, triggering automated internal security alerts and a Department of Homeland Security review into potential data handling risks.
The incident occurred during the summer of 2025 when Madhu Gottumukkala, who became CISA’s acting director in May, received special permission from the agency’s Chief Information Officer to use the commercial AI tool shortly after joining the agency. At the time, public AI platforms like ChatGPT were blocked for most Department of Homeland Security (DHS) employees due to security concerns.
The documents in question were labeled “For Official Use Only” (FOUO), a designation indicating they are unclassified but sensitive and not intended for public disclosure. While the materials did not contain classified information, sending them to a public AI service without authorization immediately triggered alerts from CISA’s monitoring systems, which are designed to prevent accidental or malicious data exposure.
Automated Security Alerts Trigger Internal Review
According to officials familiar with the matter, CISA’s internal security sensors generated multiple warnings as soon as the file uploads occurred. The warnings came from built-in data loss prevention (DLP) mechanisms and prompted senior DHS leadership to assess the potential impact on the agency’s cybersecurity posture and its compliance with internal data handling policies.
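DHS has not described the specific tooling involved, but DLP rules of this kind commonly match outbound uploads against a list of public AI endpoints and known sensitivity markings. The Python sketch below is a simplified illustration of that pattern; the domain list, event fields, and marking strings are assumptions made for this example, not the agency’s actual configuration.

```python
# Simplified sketch of a DLP-style rule: flag outbound file uploads to public
# AI services when the file carries a sensitivity marking such as "FOUO".
# The domains, event fields, and markings below are illustrative assumptions.
from dataclasses import dataclass

PUBLIC_AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com"}
SENSITIVITY_MARKINGS = ("FOUO", "FOR OFFICIAL USE ONLY", "CUI")


@dataclass
class UploadEvent:
    user: str
    destination_domain: str
    file_name: str
    file_text: str  # extracted text of the uploaded file


def is_policy_violation(event: UploadEvent) -> bool:
    """Return True if a marked document is sent to a public AI endpoint."""
    if event.destination_domain.lower() not in PUBLIC_AI_DOMAINS:
        return False
    content = event.file_text.upper()
    return any(marking in content for marking in SENSITIVITY_MARKINGS)


if __name__ == "__main__":
    event = UploadEvent(
        user="jdoe",
        destination_domain="chatgpt.com",
        file_name="contract_summary.docx",
        file_text="FOR OFFICIAL USE ONLY\nDraft contracting terms...",
    )
    if is_policy_violation(event):
        print(f"DLP ALERT: {event.user} uploaded '{event.file_name}' "
              f"to {event.destination_domain}")
```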
The Department of Homeland Security, which oversees CISA, launched an internal review to determine whether the incident compromised national security or violated protocols. While no malicious intent is suspected, the review is examining whether proper safeguards were followed and what risks might stem from using public AI tools in government workflows.
Public AI Tools and Government Data: A Risk Discussion
The controversy highlights a broader issue in cybersecurity and data governance: how public AI systems handle sensitive inputs. Commercial AI tools such as ChatGPT typically process user inputs on external servers managed by private companies, raising questions about data retention, possible reuse for model training, and unintended exposure beyond the original session.
In contrast, government agencies often use internally controlled AI instances — like CISA’s approved DHSChat — which ensure that sensitive data stays within secure federal networks and does not leave the agency’s infrastructure.
Security experts say the incident underscores the critical importance of well-defined AI governance policies in high-risk environments. They point to “shadow AI” — the use of unsanctioned public AI tools for work purposes — as a growing compliance risk that can lead to inadvertent data leaks and regulatory complications, even without malicious intent.
Leadership Under Scrutiny
The incident also raises questions about the leadership and operational processes within CISA. Gottumukkala’s request for special access to a blocked public AI tool — at a time when most DHS personnel did not have access — and the subsequent alerts have drawn critical attention to how emerging technologies are adopted and controlled in federal cybersecurity agencies.
While the full outcome of the internal review is not yet public, the episode has sparked debate about balancing innovation with stringent data protection practices, particularly as government agencies increasingly explore AI for efficiency and analytical purposes.
What This Means for Cybersecurity Teams
- AI governance must be formalized with strict data categorization and handling protocols.
- Public AI models should be avoided for processing sensitive or official materials unless explicitly sanctioned.
- Organizations should strengthen shadow AI detection and user training to reduce compliance risks (a simplified detection sketch follows this list).
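Detection approaches vary by organization, but a common first step is sweeping web proxy logs for traffic to unsanctioned AI services. The sketch below assumes a simple CSV export with user and destination_host columns and an illustrative domain list; it is a starting point under those assumptions, not a drop-in rule for any particular product.

```python
# Sketch: sweep web proxy logs for traffic to unsanctioned ("shadow") AI
# services. The log schema and domain list are assumptions for illustration.
import csv
from collections import Counter

UNSANCTIONED_AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com",
}


def shadow_ai_hits(proxy_log_csv: str) -> Counter:
    """Count requests per user to unsanctioned AI domains.

    Expects a CSV with at least 'user' and 'destination_host' columns
    (an assumed schema; adjust to your proxy's export format).
    """
    hits = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if any(host == d or host.endswith("." + d)
                   for d in UNSANCTIONED_AI_DOMAINS):
                hits[row["user"]] += 1
    return hits


if __name__ == "__main__":
    for user, count in shadow_ai_hits("proxy_log.csv").most_common(10):
        print(f"{user}: {count} requests to unsanctioned AI services")
```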

