
Here’s what could happen if CISA 2015 expires next month

18 August 2025 at 06:00

Expiration of a 2015 law at the end of September could dramatically reduce cyber threat information sharing within industry, as well as between companies and the federal government, almost to the point of eliminating it, some experts and industry officials warn.

The Cybersecurity Information Sharing Act, also known as CISA 2015, is due to end next month unless Congress extends it. Leaders of both the House and Senate panels responsible for reauthorizing it say they intend to act on legislation next month, but the law still stands to expire soon without a quick bicameral deal.

The original 2015 law provided legal safeguards for organizations to share threat data with other organizations and the federal government.

“We can expect, roughly, potentially, if this expires, maybe an 80 to 90% reduction in cyber threat information flows, like raw flows,” Emily Park, a Democratic staffer on the Senate Homeland Security and Governmental Affairs Committee, said at an event last month. “But that doesn’t say anything about the break in trust that will occur as well, because at its core, CISA 2015, as an authority, is about trust, and being able to trust the businesses and organizations around you, and being able to trust the federal government that it will use the information you share with it.”

That estimate — 80 to 90% — is on the high side of warnings issued by policymakers and others, and some reject the notion that the sky is catastrophically falling should it lapse. Additionally, some of the organizations warning about the fallout from the law’s lapse benefit from its provisions. But there’s near-unanimity that expiration of the law could largely shift decisions about cyber threat info sharing from organizations’ chief information security officers to the legal department.

“If you think about it from the company’s perspective, what a lapse would do would be to cause the ability to share information — to move the decision from the CISO to the general counsel’s office,” said Amy Shuart, vice president of technology and innovation at Business Roundtable, which considered the issue important enough to fly in CISOs from member companies to meet with lawmakers this summer and persuade them to act. “And any good general counsel is going to say, ‘I used to have authority here that protects us from antitrust. We don’t have it anymore. Now I’ve got concerns.’ So we do anticipate that if this was to lapse, the vast majority of private sector information sharing would shut down just due to legal risk.”

A common expectation among watchers is that Congress will pass a short-term extension attached to a stopgap spending bill known as a continuing resolution before the end of the current fiscal year, which also falls at the end of September. But that still gives lawmakers a short window, and even if a short-term extension passes, Hill appropriators are likely to grow impatient and unwilling to support any extension past the end of December.

Senate Homeland Security and Governmental Affairs Chairman Rand Paul, R-Ky., said last month that he intends to hold a markup of CISA 2015 extension legislation in September. A critic of the Cybersecurity and Infrastructure Security Agency over allegations that it pushed social media outlets to censor content about election security and COVID-19 — allegations that then-CISA leaders denied — Paul said he wants to include language in any extension prohibiting the agency known as CISA from censorship.

The new leader of the House Homeland Security Committee, Andrew Garbarino, R-N.Y., also has said reauthorization is a priority, but wants to make other changes to the law as well.

“Reauthorizing the Cybersecurity and Information Sharing Act is essential as the deadline nears and as threats evolve,” Garbarino said in a statement to CyberScoop. “The House Committee on Homeland Security plans to mark up our legislative text for its reauthorization shortly after Congress returns from recess in September. In a 10-year extension, I will preserve the privacy protections in the law, and I aim to provide enhanced clarity to certain pre-existing provisions to better address the evolving threat landscape.”

Separate from the 2015 law, the Justice and Homeland Security departments have issued and updated legal guidance pertaining to cyber threat information sharing that sector-specific information sharing and analysis centers say undergirds company-to-company exchanges.

But a Supreme Court decision last year about federal regulatory authority could cast a shadow over that guidance should CISA 2015 expire, warned Michael Daniel, leader of the Cyber Threat Alliance. Furthermore, a failure by Congress to act could send a message to courts.

“A lack of congressional action to positively reauthorize private entities to monitor their networks, deploy defensive measures, and share information ‘notwithstanding any other provision of law’ introduces uncertainty about sharing information that could trigger certain criminal laws, such as the Computer Fraud and Abuse Act or the Stored Communications Act, or could violate antitrust laws when participating in collective cyber defense,” he recently wrote. “In short, the resulting uncertainty would reduce the amount of sharing that occurs, reintroduce friction into the system, and inhibit the ability to identify, detect, track, prepare for, or respond to cyber threats.”

Daniel told CyberScoop some of those discussions about expiration fallout are hypothetical at this point, but legal experts have told him they are realistic. 

Trump administration officials and nominees have said they support reauthorization of the 2015 law. The law is also linked to the administration’s recent artificial intelligence action plan, which calls for establishment of an AI-ISAC.

“One of the things that we’ve heard the administration say loud and clear about their approach with the [AI] action plan is that they were thinking about what they could do within their existing authorities,” Shuart said. “CISA 2015 is an important existing authority for the action plan to be successful.”

Still, the future of the 2015 law is uncertain.

“There’s a lot of people kind of searching around for how to do this. I really couldn’t say I know that there’s a consensus,” said Larry Clinton, president of the Internet Security Alliance. “I know that there are people working in multiple different committees — Homeland Security, Armed Services, Appropriations, Intel — who are trying to figure out how to do this. And that’s a good thing, because we want all that support. It’s also a troubling thing because we wind up with too many cooks in the kitchen, and it’s harder to get things done without a consensus on the specifics of what needs to be done, given the tight timeline.”

The post Here’s what could happen if CISA 2015 expires next month appeared first on CyberScoop.

Trump AI plan pushes critical infrastructure to use AI for cyber defense

By: djohnson
23 July 2025 at 13:27

The Trump administration’s new AI Action Plan calls for companies and governments to lean into the technology when protecting critical infrastructure from cyberattacks.

But it also recognizes that these systems are themselves vulnerable to hacking and manipulation, and calls for industry adoption of “secure by design” technology standards to limit their attack surfaces.

The White House plan, released Wednesday, calls for critical infrastructure owners — particularly those with “limited financial resources” — to deploy AI tools to protect their information and operational technologies.

“Fortunately, AI systems themselves can be excellent defensive tools,” the plan said. “With continued adoption of AI-enabled cyberdefensive tools, providers of critical infrastructure can stay ahead of emerging threats.”

Over the past year, large language models have shown increasing capacity to write code and perform certain cybersecurity functions much faster than humans. But they also leave massive security holes in the code they produce, can be jailbroken or hijacked by other parties through prompt injection and data poisoning attacks, and can leak sensitive data by accident.

As such, the administration’s plan builds on a previous initiative by the Cybersecurity and Infrastructure Security Agency under the Biden administration to promote “secure by design” principles for technology and AI vendors. That approach was praised in some quarters for bringing industry together to agree to a set of shared security principles. Others rolled their eyes at the entirely voluntary nature of the commitments, arguing that the approach amounted to a pinky promise from tech companies in lieu of regulation. 

The Trump plan states that “all use of AI in safety-critical or homeland security applications should entail the use of secure-by-design, robust, and resilient AI systems that are instrumented to detect performance shifts, and alert to potential malicious activities like data poisoning or adversarial example attacks.”

The plan also recommends the creation of a new AI Information Sharing and Analysis Center (AI-ISAC), led by the Department of Homeland Security, to share intelligence on AI-related threats.

“The U.S. government has a responsibility to ensure the AI systems it relies on — particularly for national security applications — are protected against spurious or malicious inputs,” the plan continues. “While much work has been done to advance the field of AI Assurance, promoting resilient and secure AI development and deployment should be a core activity of the U.S. government.”

The plan does not detail how the administration intends to define which entities or systems are “safety-critical” or constitute “homeland security applications.” Nor does it outline how companies or utilities of limited financial means would pay for and maintain AI defensive systems, which are not currently capable of autonomous cybersecurity work without significant human expertise and direction.

The plan proposes no new spending for the endeavor, and other sections are replete with mentions of the administration’s intentions to review and limit or reduce federal AI funding streams to states that don’t share the White House’s broader deregulatory approach.

Grace Gedye, an AI policy analyst for Consumer Reports, said “it’s unclear which state laws will be considered ‘burdensome’ and which federal funds are on the line.”

The plan also calls for the promotion and maturation of the federal government’s ability to respond to active cyber incidents involving AI systems. The National Institute of Standards and Technology will lead an effort to partner with industry and AI companies to build AI-specific guidance into incident response plans, and CISA will modify existing industry guidance to loop agency chief AI officers into discussions on active incidents.

Initial reactions to the plan split along predictable lines: business-friendly groups cheered the administration’s deregulatory approach to AI, while privacy and digital rights groups said the White House’s overall approach will push the AI industry further toward less-constrained, more dangerous and more exploitative models and applications.

Patrick Hedger, director of policy for NetChoice, a trade association for tech companies and online businesses, praised the plan, calling the difference between the Trump and Biden approaches to AI regulation “night and day.”

“The Biden administration did everything it could to command and control the fledgling but critical sector,” Hedger said. “That is a failed model, evident in the lack of a serious tech sector of any kind in the European Union and its tendency to rush to regulate anything that moves. The Trump AI Action Plan, by contrast, is focused on asking where the government can help the private sector, but otherwise, get out of the way.”

Samir Jain, vice president of policy at the Center for Democracy and Technology, said the plan had “some positive elements,” including “an increased focus on the security of AI systems.”

But ultimately, he called the plan “highly unbalanced, focusing too much on promoting the technology while largely failing to address the ways in which it could potentially harm people.”

Daniel Bardenstein, a former CISA official and cyber strategist who led the agency’s AI Bill of Materials initiative, questioned the lack of a larger framework in the action plan for how mass AI adoption will impact security, privacy and misuse by industry.

“The Action Plan talks about innovation, infrastructure, and diplomacy — but where’s the dedicated pillar for security and trust?” Bardenstein said. “That’s a fundamental blind spot.”

The White House plan broadly mirrors a set of principles laid out by Vice President JD Vance in a February speech, when he started off saying he was “not here to talk about AI safety” and likened it to a discipline dedicated to preventing “a grown man or woman from accessing an opinion that the government thinks is misinformation.”

In that speech, Vance made it clear the administration viewed unconstrained support for U.S.-based industry as a key bulwark against the threat of Chinese AI domination. Apart from some issues like ideological bias — where the White House plan takes steps to prevent “Woke AI” — the administration was not interested in tying the hands of industry with AI safety mandates.

That deregulatory posture could undermine any corresponding approach to encourage industry to make AI systems more secure.

“It’s important to remember that AI and privacy is more than one concern,” said Kris Bondi, CEO and co-founder of Mimoto, a startup providing AI-powered identity verification services. “AI has the ability to discover and utilize personal information without regard to impact on privacy or personal rights. Similarly, AI used in advanced cybersecurity technologies may be exploited.”

She noted that “security efforts that rely on surveillance are creating their own version of organizational risks,” and that many organizations will need to hire privacy and security professionals with a background in AI systems.

A separate section on the Federal Trade Commission, meanwhile, calls for a review of all agency investigations, orders, consent decrees and injunctions to ensure they don’t “burden AI innovation.”

That language, Gedye said, could be “interpreted to give free rein to AI developers to create harmful products without any regard for the consequences.” 

The post Trump AI plan pushes critical infrastructure to use AI for cyber defense appeared first on CyberScoop.
