Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

NIST Publishes Guide for Protecting ICS Against USB-Borne Threats

1 October 2025 at 07:16

NIST Special Publication 1334 focuses on reducing cybersecurity risks associated with the use of removable media devices in OT environments.

The post NIST Publishes Guide for Protecting ICS Against USB-Borne Threats appeared first on SecurityWeek.

Why federal IT leaders must act now to deliver NIST’s post-quantum cryptography transition

By: Greg Otto
22 September 2025 at 05:30

In August 2024, the National Institute of Standards and Technology published its first set of post-quantum cryptography (PQC) standards, the culmination of over seven years of cryptographic scrutiny, review and competition. 

As the standards were announced, the implications for cybersecurity leaders were clear: The U.S. government must re-secure its entire digital infrastructure — from battlefield systems to tax records — against adversaries preparing to use quantum computers to break our encryption.

This isn’t a theoretical risk; it’s an operational vulnerability. The cryptography that secures federal data today will be obsolete — NIST has already set a deadline to ban some algorithms by 2035 — and our adversaries know it.

A foundational national security threat

Quantum computers are no longer science fiction — they’re a strategic priority for governments across the United States, Europe, China, and beyond, investing billions in their development. While the technology holds promise for scientific and economic breakthroughs, it also carries significant risks for national security.

If just one adversarial state succeeds in building a large enough quantum computer, it would render RSA, ECC, and other foundational cryptographic systems — the algorithms underpinning federal communications, authentication, and data protection — completely obsolete. This would occur not in years or decades that it would take a classical computer today, but in days.

Even before such computers exist, the risk is clear. Intelligence agencies like the National Security Agency have long warned of “harvest now, decrypt later” attacks. That means sensitive U.S. government data — captured today over insecure links or stolen in data breaches — may be stored in data centers with the intention of being decrypted years from now when quantum capabilities mature. This includes classified material, personally identifiable information, defense logistics data, and more.

We are not talking about theoretical vulnerabilities or bugs. We are talking about a complete systemic failure of classical cryptography in the face of a new computing paradigm, and a long-known one at that.

You’ve been warned and instructed

If you work in federal IT or security and haven’t started quantum-proofing your systems, you are already behind. The U.S. government has made its intentions crystal clear over the past three years. 

National Security Memorandum 10 (NSM-10), under the Biden administration, was signed in 2022 and mandates that all National Security Systems transition to quantum-resistant cryptography by 2030. This was followed by Office of Management and Budget memo M-23-02 in November 2022, which requires all federal civilian agencies to inventory their cryptographic assets, assess quantum vulnerability, and develop transition plans.

These early instructions were cemented in the NSA’s CNSA 2.0 guidelines, stating that systems protecting classified and national security data must move to quantum-safe algorithms before the 2035 deadline, with many systems already transitioned by 2030, using NIST’s approved cryptographic standards.

This is not a proposal; it is federal policy. The deadlines are set. The threat is recognized and the technology is ready.

The scale is unprecedented but not insurmountable

There hasn’t been a cryptographic overhaul of this magnitude since the transition to public-key cryptography in the 1980s and arguably not since Y2K. But unlike Y2K, there is no fixed date when things will fail. There won’t be a headline or official press release when quantum computing arrives. If you’re waiting for a clear signal, you won’t get one — it will simply be here, and those who haven’t prepared will already be behind.

Just as when the Allies broke the Enigma machine, the first nation to build a cryptographically relevant quantum computer is not likely to announce this to the world and their adversaries. 

Quantum-safe transition isn’t as simple as swapping out a cryptographic library. Legacy systems across agencies rely on hardcoded cryptographic protocols. Hardware modules may require firmware upgrades or full replacement. Key management systems will need to be redesigned. Certification and compliance processes must be updated. 

This encryption is found everywhere across the technology supply chain and in everyday life. With so many critical government functions, services, systems and departments now run online, just one weak link in the supply chain could bring the whole network down. 

Under the NSA’s CNSA 2.0 guidelines, any business that wants to do business with the U.S. government must implement PQC, especially for any new technology procurement beyond 2030. Furthermore, any products using the designated vulnerable encryption will be discontinued by 2035.

Most agencies aren’t prepared, and the private sector vendors they depend on are working hard to provide the tools needed to deliver the transition. What we must be careful of is some suppliers marketing “quantum-safe” solutions that do not meet NIST standards and may introduce new vulnerabilities down the line.

What federal IT leaders must do today 

The countdown to 2030 and 2035 has already begun. Federal CIOs, CISOs, and program managers should take the following steps this fiscal year:

  1. Enforce cryptographic discovery mandates. OMB memo M-23-02 requires all agencies to submit an annual inventory of cryptographic systems. If your agency hasn’t complied or gone beyond minimal discovery, it’s time to escalate.
  2. Demand vendor transparency. Your suppliers must tell you when and how they plan to support NIST’s PQC algorithms, not “proprietary” solutions. If they can’t, find new ones.
  3. Fund pilot deployments now. Testing post-quantum algorithms in isolated systems today will reveal architectural bottlenecks and allow for smoother rollout in future years.
  4. Educate procurement teams. Use the NSA’s quantum-safe procurement guidance to ensure RFPs, contracts, and tech refreshes explicitly require PQC readiness.
  5. Treat PQC as a cybersecurity budget line item, not a future capital project. Quantum risk is not hypothetical, it’s live and needs action to address it today.

The bottom line: This is a national defense imperative

You don’t have to believe the quantum hype — you just have to follow your own government’s threat assessments.

 Federal legislation, including the Quantum Computing Cybersecurity Preparedness Act, signed into law in December 2022, requires agencies to prepare for the migration.

If your systems still rely on RSA, ECC, or other legacy algorithms without a transition roadmap,  you are not defending them — you are leaving them open to attack.

The NIST standards show that with one year of progress behind us, there are five years of opportunity ahead.

Ali El Kaafarani is the founder and CEO of PQShield, a global leader in post-quantum cryptography.

The post Why federal IT leaders must act now to deliver NIST’s post-quantum cryptography transition appeared first on CyberScoop.

Top AI companies have spent months working with US, UK governments on model safety

By: djohnson
15 September 2025 at 16:37

Both OpenAI and Anthropic said earlier this month they are working with the U.S. and U.K. governments to bolster the safety and security of their commercial large language models in order to make them harder to abuse or misuse.

In a pair of blogs posted to their websites Friday, the companies said for the past year or so they have been working with researchers at the National Institute of Standards and Technology’s U.S. Center for AI Standards for Innovation and the U.K. AI Security Institute.

That collaboration included granting government researchers access to the  companies’ models, classifiers, and training data. Its purpose has been to enable independent experts to assess how resilient the models are to outside attacks from malicious hackers, as well as their effectiveness in blocking legitimate users from leveraging the technology for legally or ethically questionable purposes.

OpenAI’s blog details the work with the institutes, which studied  the capabilities of ChatGPT in cyber, chemical-biological and “other national security relevant domains.”That partnership has since been expanded to newer products, including red-teaming the company’s AI agents and exploring new ways for OpenAI “to partner with external evaluators to find and fix security vulnerabilities.”

OpenAI already works with selected red-teamers who scour their products for vulnerabilities, so the announcement suggests the company may be exploring a separate red-teaming process for its AI agents.

According to OpenAI, the engagement with NIST yielded insights around two novel vulnerabilities affecting their systems. Those vulnerabilities “could have allowed a sophisticated attacker to bypass our security protections, and to remotely control the computer systems the agent could access for that session and successfully impersonate the user for other websites they’d logged into,” the company said.

Initially, engineers at OpenAI believed the vulnerabilities were unexploitable and “useless” due to existing security safeguards. But researchers identified a way to combine the vulnerabilities with a known AI hijacking technique — which corrupts the underlying context data the agent relies on to guide its behavior — that allowed them to take over another user’s agent with a 50% success rate.  

Between May and August, OpenAI worked  with researchers at the U.K. AI Security Institute to test and improve safeguards in GPT5 and ChatGPT Agent. The engagement focused on red-teaming the models to prevent biological misuse —  preventing the model from providing step-by-step instructions for making bombs, chemical or biological weapons.

The company said it provided the British government with non-public prototypes of its safeguard systems, test models stripped of any guardrails, internal policy guidance on its safety work, access to internal safety monitoring models and other bespoke tooling.

Anthropic also said it gave U.S. and U.K. government researchers access to its Claude AI systems for ongoing testing and research at different stages of development, as well as its classifier system for finding jailbreak vulnerabilities.

That work identified several prompt injection attacks that bypassed safety protections within Claude — again by poisoning the context the model relies on with hidden, malicious prompts — as well as a new universal jailbreak method capable of evading standard detection tools. The jailbreak vulnerability was so severe that Anthropic opted to restructure its entire safeguard architecture rather than attempt to patch it.

Anthropic said the collaboration taught the company that giving government red-teamers deeper access to their systems could lead to more sophisticated vulnerability discovery.

“Governments bring unique capabilities to this work, particularly deep expertise in national security areas like cybersecurity, intelligence analysis, and threat modeling that enables them to evaluate specific attack vectors and defense mechanisms when paired with their machine learning expertise,” Anthropic’s blog stated.

OpenAI and Anthropic’s work with the U.S. and U.K. comes as some AI safety and security experts have questioned whether those governments and AI companies may be deprioritizing technical safety guardrails as policymakers seek to give their domestic industries maximal freedom to compete with China and other competitors for global market dominance.

After coming into office, U.S. Vice President JD Vance downplayed the importance of AI safety at international summits, while British Labour Party Prime Minister Keir Starmer reportedly walked back a promise in the party’s election manifesto to enforce safety regulations on AI companies following Donald Trump’s election. A more symbolic example: both the U.S. and U.K. government AI institutes changed their names this earlier year to remove the word “safety.”

But the collaborations indicate that some of that work remains ongoing, and not every security researcher agrees that the models are necessarily getting worse.

Md Raz, a Ph.D student at New York University who is part of a team of researchers that study cybersecurity and AI systems, told CyberScoop that in his experience commercial models are getting harder, not easier, to jailbreak with each new release.

“Definitely over the past few years I think between GPT4 and GPT 5 … I saw a lot more guardrails in GPT5, where GPT5 will put the pieces together before it replies and sometimes it will say, ‘no, I’m not going to do that.’”

Other AI tools, like coding models “are a lot less thoughtful about the bigger picture” of what they’re being asked to do and whether it’s malicious or not, he added, while open-source models are “most likely to do what you say” and existing guardrails can be more easily circumvented.

The post Top AI companies have spent months working with US, UK governments on model safety appeared first on CyberScoop.

The overlooked changes that two Trump executive orders could bring to cybersecurity

13 August 2025 at 15:04

Two executive orders President Donald Trump has signed in recent months could prove to have a more dramatic impact on cybersecurity than first thought, for better or for worse.

Overall, some of Trump’s executive orders have been more about sending a message than spurring lasting change, as there are limits to their powers. Specifically, some of the provisions of the two executive orders with cyber ramifications — one from March on state and local preparedness generally, and one from June explicitly on cybersecurity — are more puzzling to cyber experts than anything else, while others preserve policies of the prior administration which Trump has criticized in harsh terms. Yet others might fall short of the orders’ intentions, in practice.

But amid the flurry of personnel changes, budget cuts and other executive branch activity in the first half of 2025 under Trump, the full scope of the two cyber-related executive orders might have been somewhat overlooked. And the effects of some of those orders could soon begin coming to fruition as key top Trump cyber officials assume their posts.

The Foundation for Defense of Democracies’ Mark Montgomery said the executive orders were “more important” than he originally understood, noting that he “underestimated” the March order after examining it more closely. Some of the steps would be positive if fully implemented, such as the preparedness order’s call for the creation of a national resilience strategy, he said.

The Center for Democracy & Technology said the June order, which would unravel some elements of executive orders under presidents Joe Biden and Barack Obama, would have a negative effect on cybersecurity.

“Rolling back numerous provisions focused on improving cybersecurity and identity verification in the name of preventing fraud, waste, and abuse is like claiming we need safer roads while removing guardrails from bridges,” said the group’s president, Alexandra Reeve Givens. “The only beneficiaries of this step backward are hackers who want to break into federal systems, fraudsters who want to steal taxpayer money from insecure services, and legacy vendors who want to maintain lucrative contracts without implementing modern security protections.”

The big changes and the in-betweens

Perhaps the largest shift in either order is the deletion of a section of an executive order Biden signed in January on digital identity verification that was intended to fight cybercrime and fraud. In undoing the measures in that section, the White House asserted that it was removing mandates “that risked widespread abuse by enabling illegal immigrants to improperly access public benefits.”

One critic, speaking on condition of anonymity to discuss the changes candidly, said “there’s not a single true statement or phrase or word in” the White House’s claim. The National Security Council did not respond to requests for comment on the order.

Some, though, such as Nick Leiserson of the Institute for Security and Technology, observed that the digital identities language in the Biden order was among the “weakest” in the document, since it only talked about how agencies should “consider” ways to accept digital identities.

The biggest prospective change in the March order was a stated shift for state and local governments to handle disaster preparedness, including for cyberattacks, a notion that drew intense criticism from cyber experts at the time who said states don’t have the resources to defend themselves against Chinese hackers alone. But that shift could have bigger ripples than originally realized.

Errol Weiss, chief security officer at the Health-ISAC, an organization devoted to exchanging threat information in the health sector, said that as the Cybersecurity and Infrastructure Security Agency has scaled back the free services it offers like vulnerability scanning, states would hypothetically have to step into that gap to aid entities like the ones Weiss serves. “If that service goes away, and pieces of it probably already have, there’s going to be a gap there,” he said.

Some of the changes from the March order might only be realized now that the Senate has confirmed Sean Cairncross as national cyber director, or after the Senate takes action on Sean Plankey to lead CISA, said Jim Lewis, a fellow at the Center for European Policy Analysis.

For instance: The order directs a review of critical infrastructure policy documents, including National Security Memorandum 22, a rewrite of a decade-old directive meant to foster better threat information sharing and respond to changing threats. There are already signs the administration plans to move away from that memorandum, a development that a Union of Concerned Scientists analyst said was worrisome, but critics of the memo such as Montgomery said a do-over could be a good thing.

Most of the other biggest potential changes, however, are in the June order. This is a partial list:

  • It eliminates a requirement under the January Biden order that government vendors provide certifications about the security of their software development to CISA for review. “I just don’t think that you can play the whole, ‘We care about cyber,’ and, ‘Oh, by the way, this incredible accountability control? We rolled that back,’” said Jake Williams, director of research and development at Hunter Strategy.
  • It removes another January Biden order requirement that the National Institute of Standards and Technology develop new guidance on minimum cybersecurity practices, thought to be among that order’s “most ambitious prescriptions.”
  • It would move CISA in the direction of implementing a “no-knock” or “no-notice” approach to hunting threats within federal agencies, Leiserson noted.
  • It strikes language saying that the internet data routing rules known as Border Gateway Protocol are “vulnerable to attack and misconfiguration,” something Williams said might ease pressure on internet service providers to make improvements. “The ISPs know it’s going to cost them a ton to address the issue,” he said.
  • It erases a requirement from the Biden order that contained no deadline, but said that federal systems must deploy phishing-resistant multi-factor authentication. 
  • It deletes requirements for pilot projects stemming from the Defense Advanced Research Projects Agency-led Artificial Intelligence Cyber Challenge. DARPA recently completed its 2025 challenge, awarding prize money at this year’s DEF CON cybersecurity conference.
  • It says that “agencies’ policies must align investments and priorities to improve network visibility and security controls to reduce cyber risks,” a change security adviser and New York University adjunct professor Alex Sharpe praised.

Some of the changes led to analysts concluding, alternatively, a continuation or rollback of directives from the January Biden executive order on things like federal agency email encryption or post-quantum cryptography.

The head-scratchers and the mysteries

Some of the moves in the June order perplexed analysts.

One was specifying that cyber sanctions must be limited, in the words of a White House fact sheet, “to foreign malicious actors, preventing misuse against domestic political opponents and clarifying that sanctions do not apply to election-related activities.” The Congressional Research Service could find no indication that cyber sanctions had been used domestically, and said the executive order appears to match prior policy.

Another is the removal of the NIST guidance on minimum cybersecurity practices. “If you’re trying to deregulate, why kill the effort to harmonize the standards?” Sharpe asked. 

Yet another is deletion of a line from the January Biden order to the importance of open-source software. “This is a bit puzzling, as open source software does underlie almost all software, including federal systems,” Leiserson wrote (emphasis his).

Multiple sources told CyberScoop it’s unclear who wrote the June order and whom they consulted with in doing so. One source said some agency personnel complained about the lack of interagency vetting of the document. Another said Alexei Bulazel, the NSC director of cyber, appeared to have no role in it.

Another open question is how much force will be put behind implementing the June order.

It loosens the strictness with which agencies must carry out the directives it lays out, at least compared with the January Biden order. It gives the national cyber director a more prominent role in coordination, Leiserson said. And it gives CISA new jobs.

“Since President Trump took office — and strengthened by his Executive Order in June — CISA has taken decisive action to bolster America’s cybersecurity, focusing on critical protections against foreign cyber threats and advancing secure technology practices,” said Marci McCarthy, director of public affairs for CISA.

California Rep. Eric Swalwell, the top Democrat on the House Homeland Security Committee’s cyber subpanel, told CyberScoop he was skeptical about what the June executive order signalled about Trump’s commitment to cybersecurity.

“The President talks tough on cybersecurity, but it’s all for show,” he said in a statement. “He signed the law creating CISA and grew its budget, but also rolled back key Biden-era protections, abandoned supply chain efforts, and drove out cyber experts. CISA has lost a third of its workforce, and his FY 2026 budget slashes its funding …

“Even if his cyber and AI goals are sincere, he’s gutted the staff needed to meet them,” Swalwell continued. “He’s also made the government less secure by giving unvetted allies access to sensitive data. His actions don’t match his words.”

Montgomery said there was a contradiction between the June order giving more responsibilities to agencies like NIST while the administration was proposing around a 20% cut to that agency, and the March order shifting responsibilities to state and local governments without giving them the resources to handle it.

A WilmerHale analysis said that as the administration shapes cyber policy, the June order “signals what that approach is likely to be: removing requirements perceived as barriers to private sector growth and expansion while preserving key requirements that protect the U.S. government’s own systems against cyber threats posed by China and other hostile foreign actors.”

For all of the changes it could make, analysts agreed the June order does continue a fair number of Biden administration policies, like commitments to the Cyber Trust Mark labeling initiative, space cybersecurity policy and requirements for defense contractors to protect sensitive information.

Some of those proposals didn’t get very far before the changeover from Biden to Trump. But it might be easier for the Trump administration to achieve its goals.

“It’s hard to say the car is going in the wrong direction when they haven’t started the engine,” Lewis said. “These people don’t have the same problem, this current team, because they’re stripping stuff back. They’re saying, ‘We’re gonna do less.” So it’s easier to do less.”

The post The overlooked changes that two Trump executive orders could bring to cybersecurity appeared first on CyberScoop.

❌
❌