OpenAI and Anthropic held classified briefings with the House Homeland Security Committee last week, diving into the cybersecurity implications of their latest AI models. The sessions focused on the threats these advanced models could pose to critical infrastructure and national security, particularly in the hands of adversarial nations such as China.
Key Takeaways
Anthropic has delayed the public release of its 'Mythos Preview' model due to its ability to identify and exploit critical security vulnerabilities. Meanwhile, OpenAI opted for a tiered rollout of its GPT-5.4-Cyber model. Both companies are collaborating with federal agencies to provide access to these tools while mitigating risks.
'Productive partnerships between industry and government are essential to stay ahead of evolving threats and protect American AI development,' said House Homeland Security Chair Andrew Garbarino (R-N.Y.).
The briefings also addressed a recent White House memo accusing China of 'industrial-scale' efforts to replicate American AI models. Garbarino emphasized the need for Congress to identify risks and ensure it's asking the right questions.
Growing Concerns
Committee members expressed alarm over demonstrations of 'jailbroken' AI models, whose built-in safety measures have been circumvented. These tools, Rep. August Pfluger (R-Texas) noted, could be manipulated for malicious purposes, underscoring the urgent need for regulatory guardrails.
Rep. Andy Ogles (R-Tenn.) added, 'AI is advancing so rapidly, and Congress is light years behind.'
The meetings are part of ongoing efforts by the committee to address the national security implications of generative AI, including its potential use in state-sponsored cyberattacks.