
FCC tightens KYC rules for telecoms, closes loophole for banned foreign services

The Federal Communications Commission approved new regulations Wednesday designed to crack down on robocalling, protect telecommunications networks from cyberattacks and further vet equipment-testing labs based overseas.

Commissioners unanimously passed a measure to strengthen telecom companies’ “Know Your Customer” requirements for verifying callers’ identities. Among the potential solutions under consideration is requiring telecoms to verify a customer’s name, address, government ID and alternative phone numbers before enabling service.

In a statement ahead of the vote, FCC Chair Brendan Carr said that under current rules some telecoms “do the bare minimum” to verify callers and have “become complicit in illegal robocalling schemes.”

“As we have continued to investigate the problem of illegal robocalls over the last year, it has become clear that some originating providers are not doing enough to vet their customers, allowing bad actors to infiltrate our U.S. phone networks,” he said.

Current rules require telecoms to take “affirmative, effective” measures to verify callers and block illegal calls, but in practice this system has largely relied on self-attestation from the companies. Because a single call can traverse multiple networks, carriers must also often rely on identity verification performed by other telecoms.

For example, the telecom that transmitted thousands of false robocalls imitating then-President Joe Biden during the 2024 New Hampshire presidential primary initially reported to the FCC that it had the highest level of confidence in the identity of those using the phone numbers. That turned out to be false, as the robocallers spoofed a well-known former state Democratic Party official.

Unsurprisingly, the commission is also interested in finding ways to better enforce Know Your Customer rules, including tying penalties to the number of illegal calls that were placed.

Since 1999, the FCC has granted blanket authorization for domestic carriers to operate interstate telecommunications services within U.S. borders. Another rule passed by the commission today would formally end that practice for foreign companies on the FCC’s covered entity list.

The list bans a small number of foreign companies based in Russia or China from selling their equipment in the U.S. on national security grounds, but Carr said equipment from those companies often winds up in U.S. products through services that don’t fall under the current legal definition of international telecommunications authority.

Commissioner Olivia Trusty, who helped lead the development of the rule, said cybersecurity threats facing telecom networks today “exceed those of any recent era” and that updates must be made to modernize and harden networks.

“In response to these growing hostilities, it is imperative that we re-examine policies that permit access to U.S. networks to ensure that frameworks originally designed to promote economic growth are not exploited in ways that jeopardize our national and economic security,” Trusty said in a statement after the vote passed.

The FCC also passed a third measure that would refuse to recognize any overseas testing or equipment lab that does not have a reciprocity agreement in place with U.S.-based labs. The rule builds on efforts last year to prohibit telecoms from relying on testing and certification labs owned or operated by foreign adversaries such as China or Russia, efforts that led the FCC to withdraw or deny certification of 23 overseas labs.

The post FCC tightens KYC rules for telecoms, closes loophole for banned foreign services appeared first on CyberScoop.

AI Is Changing Vulnerability Discovery, and Your Software Supply Chain Strategy Has to Change With It

Wade Woolwine is Senior Director, Product Security at Rapid7.

The headlines around Glasswing have focused on how quickly AI can surface vulnerabilities, which has naturally caught the attention of security leaders. In my conversations with teams and customers, the more useful discussion has been about what that speed means in practice for business protection, especially across open source risk, dependency choices, and software supply chain resilience. The deeper issue for security leaders sits elsewhere. 

Software risk is becoming harder to manage across the full lifecycle, especially in open source dependencies, build pipelines, developer environments, and the operational processes that sit between disclosure and remediation. When vulnerabilities can be found faster and at greater depth, security teams need more than another source of findings. They need a stronger way to understand what they run, what they trust, what they can patch quickly, and where a single weak dependency can create disproportionate risk.

Faster discovery makes software supply chain resilience a more immediate leadership issue. CISOs need a clearer view of how dependencies are chosen, monitored, validated, and governed across production, build, and developer environments, especially as open source remains essential to modern software development.

Organizations already struggle to absorb vulnerability disclosures at the pace they arrive. When discovery gets faster, the operational gap widens between knowing there is a problem and being able to do something useful about it. That gap is especially serious in the software supply chain, where a single dependency can introduce risk into build systems, production workloads, developer endpoints, and the tools used to secure them.

This is why I would frame AI-driven vulnerability discovery risk as a lifecycle challenge. The pressure does not sit in one place, but across inventory, dependency decisions, threat intelligence, patching discipline, and validation – with people, process, and visibility shaping how well an organization can respond. Technology matters, but it cannot compensate for a weak operating model underneath it.

Open source still matters. Dependency choices matter more.

Open source remains essential to modern software development because it helps teams move faster and get products to market without rebuilding common functionality from scratch. Abandoning it is not a realistic answer; the better response is to be more deliberate about where and how third-party code enters the environment.

Open source has always involved a trade-off between speed, efficiency, flexibility, and inherited risk, and that trade-off becomes harder to manage as AI makes code review deeper and faster. More flaws and supply chain compromises will likely be found in packages that teams have trusted for years, including transitive dependencies most developers did not knowingly choose. One only needs to look back a few weeks to find that the widely used Axios package suffered a supply chain compromise that bundled a Remote Access Trojan (RAT) designed to steal secrets. That raises the value of understanding which dependencies are essential, which ones can be removed, which ones pull in large chains of transitives, and which ones are maintained by too few people to inspire confidence.

That work starts with a more disciplined question than “Is there a package that does this?” It starts with “Do we need this dependency, and do we understand the risk that comes with it?” The safest dependency is often the one that never enters the environment in the first place.
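Part of answering that second question is seeing how much code a candidate dependency actually pulls in. A simple graph walk makes the transitive footprint visible; the package names and dependency graph below are entirely hypothetical, since in practice the graph would come from a lockfile or resolver output.

```python
from collections import deque

def transitive_closure(package, graph):
    """Return every package pulled in, directly or transitively, by `package`.

    `graph` maps each package name to its direct dependencies.
    """
    seen, queue = set(), deque([package])
    while queue:
        current = queue.popleft()
        for dep in graph.get(current, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# Hypothetical dependency graph, for illustration only.
graph = {
    "web-framework": ["http-lib", "template-lib"],
    "http-lib": ["tls-lib", "url-lib"],
    "template-lib": [],
    "tls-lib": [],
    "url-lib": [],
    "tiny-util": [],
}

# One "small" direct choice can drag in a chain of transitives...
print(sorted(transitive_closure("web-framework", graph)))
# ...while another pulls in nothing, making it the lower-risk choice.
print(sorted(transitive_closure("tiny-util", graph)))
```

Running this kind of check at dependency-intake time turns “do we understand the risk?” from a judgment call into a reviewable number.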

Why inventory has to go deeper than package lists

Supply chain resilience begins with knowing what you are actually running, which sounds straightforward until a critical disclosure lands in a package no one realized was in the environment three layers deep. Dependency graphs are deeper than most teams think, and transitive risk is where a lot of operational pain begins. A package chosen directly by a developer may bring in dozens of additional packages, each with its own maintainers, release cadence, security posture, and potential failure points.

A mature approach to inventory needs to move beyond a static package list, because CISOs need confidence in three views at once: What is declared in source, what is resolved and built, and what is actually running in production? Those views often drift apart over time, which means a package can be patched in source and still remain unpatched in a deployed container or runtime environment. An SBOM on its own will not close that gap; continuous, usable inventory will.
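The drift between those three views can be checked mechanically. The snapshots below are invented for illustration; real inputs would be a source manifest, a lockfile or build record, and a runtime inventory of what is actually deployed.

```python
# Hypothetical snapshots of one application at three lifecycle stages.
declared = {"requests": "2.31.0", "urllib3": "2.0.7"}                  # source manifest
built    = {"requests": "2.31.0", "urllib3": "2.0.7", "idna": "3.6"}   # lockfile / build
running  = {"requests": "2.28.0", "urllib3": "2.0.7", "idna": "3.6"}   # deployed container

def drift(a, b):
    """Packages whose presence or pinned version differs between two views."""
    return {pkg for pkg in a.keys() | b.keys() if a.get(pkg) != b.get(pkg)}

# Transitive dependencies often appear only at build time...
print(drift(declared, built))
# ...and a package patched in source can remain stale in production.
print(drift(built, running))
```

Continuously diffing these views is what closes the gap a static SBOM leaves open.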

That inventory also needs clear ownership attached to it, because the moment a critical dependency is identified, someone has to decide what happens next, coordinate the change, and absorb the operational consequences. Security teams cannot do that well if responsibility is unclear, which is why ownership needs to be treated as part of resilience rather than an administrative detail.

Build pipelines and developer environments deserve the same scrutiny as production

Supply chain conversations still tend to start with production systems, even though recent incidents have shown how quickly compromise can move through the build layer, developer tooling, or the security tooling inside the pipeline itself. Those environments hold code, secrets, and trust relationships that attackers know how to exploit, while developer workstations often carry a rich mix of credentials and elevated privileges because speed matters to the business. Build systems are predictable and privileged, which makes them both valuable and vulnerable, but also easier to monitor.

Seeing those layers as part of the same attack surface means asking harder questions about how code enters the build, how package updates are governed, how actions and dependencies are pinned, what secrets exist in CI/CD, and what controls are in place on developer endpoints to detect anomalous behavior or stop high-risk package activity before it goes unnoticed.

You can gauge the maturity of the operating model with the answers to a few basic questions:

  • How tightly are dependencies controlled in CI?

  • How are package lifecycle scripts governed?

  • What secrets exist in CI/CD, and what protections surround them?

  • What visibility exists into anomalous behavior on developer endpoints?

  • How would the team detect or prevent high-risk package activity before it spreads?

If those answers are unclear, important parts of the model are still missing.
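One of those answers, how tightly dependencies are controlled in CI, lends itself to an automated check. This sketch assumes GitHub Actions-style `uses:` syntax and flags references not pinned to a full commit SHA; the workflow snippet and SHA below are made up for illustration.

```python
import re

# A `uses:` reference pinned to a tag or branch can change underneath you;
# a reference pinned to a full 40-character commit SHA cannot.
PINNED = re.compile(r"uses:\s*\S+@[0-9a-f]{40}\s*$")

def unpinned_actions(workflow_text):
    """Return the `uses:` lines in a workflow not pinned to a commit SHA."""
    return [
        line.strip()
        for line in workflow_text.splitlines()
        if "uses:" in line and not PINNED.search(line)
    ]

# Hypothetical workflow fragment; the SHA is invented for the example.
workflow = """\
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@1a4442cacd436585916779262731d5b162bc6ec7
"""
print(unpinned_actions(workflow))
```

A check like this can run as a CI gate so that unpinned references never merge unnoticed.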

Why prioritization matters more as scanning accelerates

When software risk rises, the instinct is often to add another scanner because more visibility feels like progress. What matters more over time, though, is how well teams can prioritize the findings that follow, assign them to the right owner, choose the right mitigation, and prove that exposure actually went down. Broader scanning and faster discovery mostly add to the pile unless the operating model behind them is strong enough to turn findings into action. Feed more issues into a process that is already stretched and the backlog grows, priorities become harder to sort, and remediation slows in the places where speed matters most. The organizations that come through this period well will be the ones that treat supply chain resilience as a systems problem, with stronger intake, clearer governance, better intelligence, and faster paths from alert to action.
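A minimal version of that prioritization might weight active exploitation and reachability above raw severity, so the queue sorts by real exposure rather than CVSS alone. The weights and findings below are illustrative, not a standard scoring model.

```python
def priority(finding):
    """Rough triage score: active exploitation and reachable code paths
    outweigh raw severity. Weights here are illustrative only."""
    score = finding["cvss"]              # base severity, 0-10
    if finding["exploited_in_wild"]:
        score += 5                       # active exploitation dominates
    if finding["reachable"]:
        score += 3                       # the vulnerable path is actually invoked
    if finding["internet_facing"]:
        score += 2
    return score

# Hypothetical findings: a critical-but-dormant CVE vs. a moderate one
# that is exploited, reachable, and exposed.
findings = [
    {"id": "CVE-A", "cvss": 9.8, "exploited_in_wild": False, "reachable": False, "internet_facing": False},
    {"id": "CVE-B", "cvss": 6.5, "exploited_in_wild": True,  "reachable": True,  "internet_facing": True},
]
for f in sorted(findings, key=priority, reverse=True):
    print(f["id"], priority(f))
```

Note how the moderate-severity finding outranks the critical one once context is applied, which is exactly the sorting a stretched remediation team needs.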

What stronger software supply chain resilience looks like in practice

A stronger response starts with a deeper inventory of dependencies across source, build, and runtime, so teams can see both direct and transitive packages and connect them back to real environments and real owners. Once that picture is in place, intelligence monitoring becomes far more useful when it runs continuously against credible signals on vulnerabilities, package risk, maintainer health, end-of-life software, and unusual changes in dependency behavior.

The same level of care needs to carry through into dependency governance, where better decisions depend on asking whether a new package is necessary, how much transitive risk it introduces, whether its maintenance model is healthy, and what policy governs its path into production. Build and developer controls belong in that same conversation, because version pinning, private registries, secret handling, script restrictions, immutable builds, ephemeral runners, and stronger endpoint monitoring all reduce the attack surface around the software supply chain.

Monitoring threat intelligence for notifications about new vulnerabilities and compromised packages, and having a well-defined, practiced process for scoping and remediating emerging threats, becomes critical. Your supply chain vulnerability and compromise response should be practiced, just like your incident response plan, through tabletop exercises and simulated threat events. You don’t want to wait until the house is on fire to learn how to execute an effective response.

Similarly, Engineering, DevOps, and Security teams should collaborate on establishing a trust and reputation scoring mechanism for supply chain dependencies. Being able to evaluate the speed of response, transparency of communication and updates, and ultimate resolution of a vulnerability or compromise speaks volumes about how much you can trust the maintainers of the software you depend on. The OpenSSF Scorecard project offers a great place to start evaluating the open source packages you’re already using.
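Scorecard results can feed that kind of trust mechanism directly. The sketch below parses a simplified response shaped like what the Scorecard REST API returns; the exact schema is an assumption here, so consult the project's documentation before relying on it.

```python
import json

# Simplified, assumed shape of an OpenSSF Scorecard API response
# (the real schema is documented by the Scorecard project).
sample_response = json.dumps({
    "score": 6.8,
    "checks": [
        {"name": "Maintained", "score": 10},
        {"name": "Pinned-Dependencies", "score": 2},
        {"name": "Code-Review", "score": 8},
    ],
})

def weak_checks(raw, threshold=5):
    """Return check names scoring below `threshold`: candidates for deeper
    review before trusting (or continuing to trust) a dependency."""
    data = json.loads(raw)
    return [c["name"] for c in data["checks"] if c["score"] < threshold]

print(weak_checks(sample_response))
```

Flagged checks then become concrete review questions for the Engineering, DevOps, and Security conversation rather than a single opaque score.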

Organizations should also have a fallback plan for when a security patch is not available. Options include exploring other open source packages that perform similar functions, applying other mitigations such as application firewalling, or even forking the project and contributing a security patch back to the community.

Validation closes the loop by showing whether the artifact came from where it was supposed to, whether the package has drifted in unexpected ways, and whether the mitigations applied are reducing live risk rather than simply documenting the process.
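A basic form of that validation is comparing a fetched artifact against the digest recorded when the dependency was approved, so silent drift or tampering fails loudly. The lockfile and artifact contents below are hypothetical.

```python
import hashlib

# Hypothetical lockfile mapping artifact names to approved SHA-256 digests.
LOCKFILE = {
    "util-1.2.0.tar.gz": hashlib.sha256(b"trusted contents").hexdigest(),
}

def verify(name, payload):
    """True only if the artifact matches the digest recorded when the
    dependency was approved; unknown names and drifted contents both fail."""
    expected = LOCKFILE.get(name)
    actual = hashlib.sha256(payload).hexdigest()
    return expected is not None and expected == actual

print(verify("util-1.2.0.tar.gz", b"trusted contents"))   # matches the lockfile
print(verify("util-1.2.0.tar.gz", b"tampered contents"))  # drifted, rejected
```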

How CISOs should think about the next 12 months

The strain on security teams is only growing, and the potential for AI to relieve some of that pressure is understandably compelling, especially when boards, CEOs, and CFOs are asking how the organization plans to adopt it. That makes this a leadership question as much as a technology one. CISOs need a clear point of view on where AI can genuinely improve resilience, where it still introduces too much uncertainty, and how to explain those choices in business terms.

If software engineering teams are already adopting AI-assisted development, security teams should be part of that conversation early, especially around dependency management. I have seen teams begin connecting AI coding agents to vulnerability management workflows so those agents can interpret vulnerabilities found in the code base, assess reachability with more context, help plan remediation, and validate updates much faster than traditional handoffs usually allow. Used well, that can reduce drag across the workflow and help teams move faster on classes of issues that are currently slowing them down.

Getting there safely still depends on the foundation underneath it. A more resilient path starts with a clearer picture of the environment and a more complete inventory of dependencies across source, build, and runtime. From there, ownership needs to be explicit, threat and vulnerability intelligence needs to be embedded into how the organization prioritizes, and dependency sprawl needs to be reduced with more discipline around what actually enters production. The same mindset should carry through to the build layer and developer endpoints, where tighter controls and better visibility help reduce unnecessary exposure, while faster and more repeatable paths from disclosure to action make it easier for teams to respond before risk compounds.

That foundation will matter regardless of which AI model or platform becomes dominant six or twelve months from now. It will also matter if the next wave of AI makes backlog reduction, lower-tier remediation, or patch validation more practical. Organizations that know what they run and how they operate will be in a much better position to adopt those capabilities with intent.

The shift security leaders should make now

Security in an AI-accelerated world needs to be managed as a systems challenge, with supply chain resilience shaped by how well organizations connect software composition, exposure visibility, dependency governance, threat intelligence, build integrity, endpoint controls, remediation workflows, and validation. When those layers are treated separately, gaps open quickly; when they are tied together through a stronger operating model, teams are in a much better position to absorb faster discovery without losing control of the response.

For CISOs, that means continuing to use open source with a more deliberate view of dependency risk, reducing unnecessary packages where possible, knowing what is running and who owns it, and monitoring threat and vulnerability intelligence with enough discipline to act before the queue overwhelms the team. It also means paying closer attention to the attack surface across production, build, and developer environments, while treating AI as something that will amplify both the strengths and the weaknesses already present in the program. Faster discovery is here, and the organizations that handle it best will be the ones that can respond with the same level of discipline.

Vercel attack fallout expands to more customers and third-party systems

Vercel said the fallout from an attack on its internal systems hit more customers than previously known, as ongoing analysis uncovered additional evidence of compromise.

The company, which makes tools and hosts cloud infrastructure for developers, maintains a “small number” of accounts were impacted, but it has yet to share a number or range of known incidents linked to the attack. Vercel created and maintains Next.js, a popular open-source web framework downloaded more than 9 million times per week, along with other open-source projects.

Vercel CEO Guillermo Rauch said the company and partners have analyzed nearly a petabyte of logs across the Vercel network and API, and learned malicious activity targeting the company and its customers extends beyond an initial attack that originated at Context.ai. 

“Threat intel points to the distribution of malware to computers in search of valuable tokens like keys to Vercel accounts and other providers,” Rauch said in a post on X.

“Once the attacker gets ahold of those keys, our logs show a repeated pattern: rapid and comprehensive API usage, with a focus on enumeration of non-sensitive environment variables,” he added.

The attack exemplifies the widespread and compounded risk posed by interconnected systems that rely on OAuth tokens, trusted relationships and overly privileged permissions linking multiple services together.

“The real vulnerability was trust, not technology,” Munish Walther-Puri, head of critical digital infrastructure at TPO Group, told CyberScoop. “OAuth turned a productivity app into a backdoor. Every AI tool an employee connects to their work account is now a potential attack surface.”

An attacker traversed Vercel’s internal systems to steal and decrypt customer data, including environment variables it stored, posing significant downstream risk. 

The company insists the breach originated at Context.ai, a third-party AI tool used by one of its employees. Researchers at Hudson Rock previously said the seeds of that attack were planted in February when a Context.ai employee’s computer was infected with Lumma Stealer malware after they searched for Roblox game exploits, a common vector for infostealer deployments. 

Vercel has not specified which systems and customer data were compromised, nor has it said whether the threat has been eradicated or contained. The company said it’s found no evidence of tampering across the software packages it publishes, concluding “we believe the supply chain remains safe.”

The company fueled further intrigue in its updated security bulletin, noting that it also identified a separate “small number of customers” that were compromised in attacks unrelated to the breach of its systems. 

“These compromises do not appear to have originated on Vercel systems,” the company said. “This activity does not appear to be a continuation or expansion of the April incident, nor does it appear to be evidence of an earlier Vercel security incident.”

It’s unclear how Vercel became aware of those attacks and why it’s disclosing them publicly. 

Vercel declined to answer questions, and Mandiant, which is running incident response and an investigation into the attack, referred questions back to Vercel. 

Vercel has not attributed the breach to any named threat group or described the attackers’ objectives. 

An online persona identifying themselves as ShinyHunters took responsibility for the attack and is attempting to sell the stolen data, which they claim includes access keys, source code and databases. Austin Larsen, principal threat analyst at Google Threat Intelligence Group, said the attacker is “likely an imposter,” but emphasized the risk of exposure is real.

Walther-Puri warned that the downstream blast radius from the attack on its systems remains undefined. “Stolen API keys and source code snippets from internal views are potentially keys to customer production environments,” he said.

The stolen data attackers claim to have “sounds almost boring … but it’s infrastructure intelligence,” Walther-Puri added. “The right environment variable doesn’t just unlock a system — it lets adversaries become that system, silently, from the inside.”

The post Vercel attack fallout expands to more customers and third-party systems appeared first on CyberScoop.

Vercel’s security breach started with malware disguised as Roblox cheats

Vercel customers are at risk of compromise after an attacker hopped through multiple internal systems to steal credentials and other sensitive data, the company said in a security bulletin Sunday. 

The attack, which didn’t originate at Vercel, showcases the pitfalls of interconnected cloud applications and SaaS integrations with overly privileged permissions. 

An attacker traversed third-party systems and connections left exposed by employees before reaching the San Francisco-based company that created and maintains Next.js and other popular open-source libraries.

Researchers at Hudson Rock said the seeds of the attack were planted in February when a Context.ai employee’s computer was infected with Lumma Stealer malware after they searched for Roblox game exploits, a common vector for infostealer deployments.

Each of the companies is pinning at least some blame for the attack on the other vendor.

Context.ai on Sunday said that breach allowed the attacker to access its AWS environment and OAuth tokens for some users, including a token for a Vercel employee’s Google Workspace account. Vercel is not a Context customer, but the Vercel employee was using Context AI Office Suite and granted it full access, the artificial intelligence agent company said. 

“The attacker used that access to take over the employee’s Vercel Google Workspace account, which enabled them to gain access to some Vercel environments and environment variables that were not marked as sensitive,” Vercel said in its bulletin. 

The company said a limited number of its customers are impacted and were immediately advised to rotate credentials. Vercel, which declined to answer questions, did not specify which internal systems were accessed or fully explain how the attacker gained access to Vercel customers’ credentials. 

Vercel CEO Guillermo Rauch said customer data stored by the company is fully encrypted, yet the attacker got further access through enumeration, or by counting and inventorying specific variables. 

“We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI,” he said in a post on X. “They moved with surprising velocity and in-depth understanding of Vercel.”

A threat group identifying itself as ShinyHunters took responsibility for the attack in a post on Telegram and is attempting to sell the stolen data, which they claim includes access keys, source code and databases.

The attacker “is likely an imposter attempting to use an established name to inflate their notoriety,” Austin Larsen, principal threat analyst at Google Threat Intelligence, wrote in a LinkedIn post. “Regardless of the threat actor involved, the exposure risk is real.”

Vercel also warned that Context’s Google Workspace OAuth app “was the subject of a broader compromise, potentially affecting its hundreds of users across many organizations.” It published indicators of compromise and encouraged customers to review activity logs and to review and rotate environment variables containing secrets.

Context and Vercel said their separate but coordinated investigations into the attack, aided by CrowdStrike and Mandiant, remain underway.

The post Vercel’s security breach started with malware disguised as Roblox cheats appeared first on CyberScoop.
