
FCC tightens KYC rules for telecoms, closes loophole for banned foreign services

The Federal Communications Commission approved new regulations Wednesday designed to crack down on robocalling, protect telecommunications networks from cyberattacks and further vet equipment-testing labs based overseas.

Commissioners unanimously passed a measure to strengthen telecom companies’ “Know Your Customer” requirements for verifying callers’ identities. Among the potential solutions under consideration is requiring telecoms to verify a customer’s name, address, government ID and alternative phone numbers before enabling service.

In a statement ahead of the vote, FCC Chair Brendan Carr said that under current rules some telecoms “do the bare minimum” to verify callers and have “become complicit in illegal robocalling schemes.”

“As we have continued to investigate the problem of illegal robocalls over the last year, it has become clear that some originating providers are not doing enough to vet their customers, allowing bad actors to infiltrate our U.S. phone networks,” he said.

Current rules require telecoms to take “affirmative, effective” measures to verify callers and block illegal calls, but in practice this system has largely relied on self-attestation from the companies. Because a single call can traverse multiple networks, carriers must also often rely on identity verification performed by other telecoms.

For example, the telecom that transmitted thousands of false robocalls imitating then-President Joe Biden during the 2024 New Hampshire presidential primary initially reported to the FCC that it had the highest level of confidence in the identity of those using the phone numbers. That turned out to be false: the robocallers had spoofed a well-known former state Democratic Party official.
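The “levels of confidence” at issue here correspond to STIR/SHAKEN caller-ID attestation, in which the originating provider signs each call with level A, B or C depending on how well it can vouch for the caller. A minimal sketch of that assignment logic follows; the class and field names are hypothetical, not any carrier’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class CallOrigination:
    customer_verified: bool   # provider has vetted the customer's identity (KYC)
    number_authorized: bool   # provider assigned or authorized the calling number

def attestation_level(call: CallOrigination) -> str:
    """Assign a STIR/SHAKEN-style attestation level to an outbound call.

    "A" (full): the provider vouches for both the customer and the number.
    "B" (partial): the customer is known, but the number is not verified.
    "C" (gateway): the provider can vouch for neither.
    """
    if call.customer_verified and call.number_authorized:
        return "A"
    if call.customer_verified:
        return "B"
    return "C"

# A call from a vetted customer using a number the provider never
# authorized (e.g. a spoofed number) should not earn full attestation:
print(attestation_level(CallOrigination(customer_verified=True, number_authorized=False)))  # → B
```

Because each downstream carrier trusts the originating provider’s signature, a provider that skips the verification behind these checks undermines the confidence levels for every network the call crosses.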

Unsurprisingly, the commission is also interested in finding ways to better enforce Know Your Customer rules, including tying penalties to the number of illegal calls that were placed.

Since 1999, the FCC has traditionally granted blanket authorization for domestic carriers to operate interstate telecommunications services within U.S. borders. Another rule passed by the commission today would formally end that practice for foreign companies on the FCC’s covered entity list.  

The list bans a small number of foreign companies based in Russia or China from selling their equipment in the U.S. on national security grounds, but Carr said equipment from those companies often winds up in U.S. products by providing services that don’t fall under the current legal definition of international telecommunications authority.

Commissioner Olivia Trusty, who helped lead the development of the rule, said cybersecurity threats facing telecom networks today “exceed those of any recent era” and that updates must be made to modernize and harden networks.

“In response to these growing hostilities, it is imperative that we re-examine policies that permit access to U.S. networks to ensure that frameworks originally designed to promote economic growth are not exploited in ways that jeopardize our national and economic security,” Trusty said in a statement after the vote passed.

The FCC also passed a third measure that would refuse to recognize any testing or equipment lab based overseas that does not have a reciprocity agreement in place with U.S.-based labs. The rule builds on efforts last year to prohibit telecoms from relying on testing and certification labs that are owned or operated by foreign adversarial countries like China or Russia, which led to the FCC withdrawing or denying certification of 23 overseas labs.

The post FCC tightens KYC rules for telecoms, closes loophole for banned foreign services appeared first on CyberScoop.

FCC pushes new rules to crack down on robocallers, foreign call centers

The Federal Communications Commission is moving to crack down on illegal robocalls and the use of foreign call centers.

At a meeting Thursday, the three-member commission unanimously approved a new proposed regulation to increase certification and disclosure requirements for obtaining phone numbers, while also expanding those same requirements to all providers seeking phone numbers from the North American Numbering Plan Administrator and resellers.

The rule – which will be shaped through public comments – is meant to make it more difficult for spammers, scammers and other illegal robocallers to obtain legitimate phone numbers. The FCC’s Office of Communications said a majority of the agency’s investigations into illegal robocalling have involved resold numbers.

It would also impose stricter disclosure requirements on telecoms about the callers on their networks and their identities, information that would help organizations like the Industry Traceback Group trace and identify robocallers as their calls hop across the nation’s patchwork of decentralized telephone networks.

Commissioner Anna Gomez said the proposed rules would help raise the bar for bad actors to obtain valid phone numbers and help close gaps in reporting that make it harder for industry and regulators to find and expunge robocallers from networks.

“Right now, bad actors are exploiting gaps in a phone number system that was designed for a simpler time,” Gomez said.

The commission plans to explore a range of solutions to strengthen numbering requirements and policies, including cracking down on common tactics that rely heavily on resold numbers — like number cycling, where “service providers churn through large quantities of telephone numbers [on] a rotating and even single-use basis to evade detection.”
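To illustrate why number cycling evades detection, here is a toy heuristic — not any FCC or carrier method — showing how numbers discarded after a handful of calls slip under per-number volume thresholds. All names and thresholds are hypothetical:

```python
from collections import defaultdict

def flag_cycled_numbers(call_records, max_calls=3):
    """Toy heuristic: numbers retired after only a handful of calls are
    consistent with 'number cycling'. Each record is a
    (provider, phone_number) pair representing one outbound call."""
    counts = defaultdict(int)
    for provider, number in call_records:
        counts[(provider, number)] += 1
    # Per-number complaint thresholds and blocklists trip on repeated
    # volume, so a number used a couple of times and discarded never trips them.
    return {key for key, n in counts.items() if n <= max_calls}

records = [("CarrierX", "555-0100")] * 2 + [("CarrierX", "555-0101")] * 50
print(flag_cycled_numbers(records))  # → {('CarrierX', '555-0100')}
```

In practice, detecting cycling requires per-provider data about number assignment and churn — precisely the kind of disclosure the proposed rule would mandate.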

Commissioner Olivia Trusty said that while changes in technology and the marketplace have brought significant benefits to consumers, they have also “made it more difficult to identify who is using telephone numbers and for what purposes, complicating both robocall enforcement and numbering administration.”

Last month, the FCC finalized regulations that require telecoms to annually certify that their caller information is accurate and provide updated information to the agency’s Robocall Mitigation Database. 

A separate proposed regulation passed by the commission Thursday would place new restrictions on the ability of U.S. telephone providers to outsource their call-center services to foreign countries. It specifically asks about the feasibility of giving consumers the option to require that their calls be routed to U.S.-based call centers; requiring calls involving “certain types of sensitive information” to be processed at U.S. locations; requiring providers to disclose the use of overseas centers to callers during a call; and requiring operators to speak proficient English.

FCC Chair Brendan Carr touted the initiative as part of the Trump administration’s stated efforts to convince American companies to onshore more of their services in the U.S.

But organizations like the AARP have also found that overseas call centers operating outside of U.S. or international law play a big role in the nation’s robocalling epidemic. In a press conference after the meeting, Carr echoed that sentiment, claiming that some criminal scammers plaguing Americans today first broke into the industry by working at outsourced call centers.

“I think it also helps us crack down on some of the illegal robocallers,” Carr said about the new onshoring rules. “At the end of the day, I think American callers should expect and deserve to reach American call centers.”


Critics call FCC router rule a ‘big swing’ that could create more supply chain uncertainty

The Federal Communications Commission’s move to ban foreign-made routers touches on a real threat, but critics say the agency’s rule is overly broad, practically unworkable and doesn’t meaningfully address weaknesses in router security that have led to major breaches of American governments and businesses.

Under the Secure Equipment Act and Secure Networks Act, the FCC may ban foreign technology manufacturers if they are deemed a national security risk. But the federal government has almost always opted to narrowly target specific foreign companies with known or problematic connections to foreign adversaries, like Chinese telecom Huawei or Russian antivirus firm Kaspersky Lab.

The restrictions announced Monday, however, simply ban all routers “produced in a foreign country” except those granted conditional approval by the departments of Defense or Homeland Security.

The order imposes a sweeping and immediate halt to the purchase of non-American routers and Wi-Fi services for government agencies and businesses, leaving unanswered questions about where to buy next and what to do with the foreign devices already embedded in their networks.

In justifying the decision, FCC Chair Brendan Carr cited a March 20 White House-led interagency report that concluded foreign-made routers pose “unacceptable” risks to U.S. national security. 

“Following President Trump’s leadership, the FCC will continue [to do] our part in making sure that U.S. cyberspace, critical infrastructure, and supply chains are safe and secure,” Carr said. 

U.S. policymakers have worried about the potential cybersecurity risks of relying on technology and equipment from countries like China or Russia, where local laws compel domestic companies to cooperate in national security investigations and hand over sensitive data. 

In 2024, members of Congress called for the Department of Commerce to investigate Chinese Wi-Fi and router makers like TP-Link, alleging the company’s “unusual degree of vulnerabilities and required compliance with [Chinese] law” amounted to an unacceptable national security risk.

Last year, five House Republican committee chairs urged Commerce Secretary Howard Lutnick to use the department’s authority “to eliminate products and services created by China and other foreign adversaries from domestic supply chains that are shown to have the potential to introduce security vulnerabilities.” An attached list of industries “needing immediate action” included routers and Wi-Fi, while mentioning TP-Link and Huawei as “Chinese or Chinese-controlled” entities.

While router insecurity is a major problem, it’s worth noting that American-made products are far from immune to foreign hacking. Major Chinese hacking campaigns, such as Salt Typhoon, succeeded not because of backdoors in Chinese-made tech but through the exploitation of known, previously reported vulnerabilities in U.S. and Western products.  

One former U.S. intelligence leader told CyberScoop that country of origin matters more when you’re dealing with an adversary like China, which has national security and vulnerability disclosure laws that require Chinese router companies to disclose cybersecurity vulnerabilities to the government first.

But it’s not just Chinese routers, or those made by America’s direct rivals, that concern intelligence officials.

Even in a global, digitally connected world, proximity still matters. Foreign countries can more easily disrupt or infect the supply chain of neighboring or bordering countries that may rely on similar parts, components or internet infrastructure.

“Attackers have so many options with what can be done with router access. [It’s] even easier if you have the country that runs and accesses them in your backyard,” said the official, who requested anonymity to speak candidly.

Investors may be drawing similar conclusions. Notably, stocks for Asian router companies fell following the FCC announcement, while U.S. company NetGear, which does not rely on Chinese supply chains, saw its shares jump 12%.  

A new point of leverage

The broad nature of the order — along with the ability to dole out exemptions to specific companies at will — effectively resets the regulatory relationship between foreign router companies and the U.S. government. Under it, each company with manufacturing operations in China or overseas would have to petition the FCC for an exemption to the rule.

The ambiguity behind what, specifically, a company would need to do to obtain an exemption could open the process up to potential abuse or political patronage, experts said.

A former FCC official told CyberScoop they were puzzled by the move, and questioned whether it was related to national security or if it would even pass legal muster in the courts.

Instead of adding targeted companies with foreign ties or a history of cybersecurity vulnerabilities to the list of banned providers — as the government has done and successfully defended in court in the past — the FCC instead sought to ban all foreign-made routers around the globe. That represents a potentially significant disruptive action to take in an environment where many businesses and governments today use TP-Link and other foreign companies for their internet needs. 

The net effect is “actually creating a new federal program of conditional approvals” for foreign router companies, the FCC alum said, one that is so broad it would take a massive combined federal effort to effectively remove bad actors from the foreign supply chain.

“I have a hard time believing that this administration — given what we’ve seen at CISA and other agencies and the mass departures — will actually roll out a sophisticated and tailored program to adequately address this kind of huge swing of an entire base of consumer products,” said the official, who was granted anonymity to speak candidly.

The official pointed to an attempt earlier this year by the FCC to ban imports of foreign drone components, saying there were similar “big swing” parallels to the legal rationale here. The drone ban is currently being challenged in court, and the official said they expect the FCC’s router order to be subject to similar lawsuits from companies.

Earlier this month, Carr also proposed new regulations that would place English language requirements on offshore call centers and asked the public for insight on potential policies to “encourage” companies to set up U.S.-based call centers, “including limits on call volume from overseas call centers.”

Carr said the FCC was also “opening up a new front in our efforts to block illegal robocalls from abroad by examining the targeted use of tariffs or bonds.”

The former FCC official said Carr’s focus on novel applications of tariff authority while discussing the implementation of two laws that are unrelated to trade, the TRACED Act and the Truth In Caller ID Act, makes it impossible to disentangle the agency’s genuine national security concerns from the Trump administration’s broader attempts to gain leverage over foreign companies in its trade fights.

“Those are weird kind of random hops that seem to be in response to this broader picture of the big tariff decision that came out,” the official said.


The long-awaited Trump cyber strategy has arrived

President Donald Trump released his administration’s cyber strategy Friday, promoting offensive operations in cyberspace, securing federal networks and critical infrastructure, streamlining regulations, leveraging emerging technologies and strengthening the cybersecurity workforce.

Trump also signed an executive order Friday directing agencies to take action to combat cybercrime and fraud.

A little more than half of the long-anticipated document’s five pages of strategy text is preamble, and two of its seven total pages are title and closing pages. Administration officials have said the strategy is deliberately high-level, and the White House promised more detailed guidance in the future.

The strategy “calls for unprecedented coordination across government and the private sector to invest in the best technologies and continue world-class innovation, and to make the most of America’s cyber capabilities for both offensive and defensive missions,” the White House said in a statement accompanying its release.

Each of the six “pillars” of the strategy offers some prescriptions.

“Shaping adversary behavior” calls for using U.S. government offensive and defensive capabilities in cyberspace, as well as incentivizing the private sector to disrupt adversary networks.

It also says Trump will “counter the spread of the surveillance state and authoritarian technologies that monitor and repress citizens,” even as administration critics argue that his administration has fostered surveillance and repression against U.S. citizens.

The shortest pillar, “promote common sense regulation,” decries rules that are only “costly checklists.” The Biden administration expanded cyber regulations, spurring some industry resistance. But the Trump pillar does talk about addressing liability, a point of emphasis for the prior administration as well.

“Modernize and secure federal networks” talks about using concepts and technologies like post-quantum cryptography, artificial intelligence and zero trust, as well as lowering barriers for vendors to sell tech to the government, to meet those goals.

To “secure critical infrastructure,” the strategy calls for fortifying not just owners and operators but also the supply chain, in part by focusing on U.S.-made rather than adversary-made products.

“We will deny our adversaries initial access, and in the event of an incident, we must be able to recover quickly,” the strategy reads. “We will galvanize the role of state, local, Tribal, and territorial authorities as a complement to — not a substitute for — our national cybersecurity efforts.” Some critics of the administration’s cybersecurity actions have contended that it has shifted the burden to state and local governments too much.

AI makes up the bulk of the pillar titled “sustain superiority in critical and emerging technologies,” which also echoes earlier parts of the strategy on quantum cryptography and privacy protection. That includes the protection of data centers, the subject of localized fights across the country over their locations and resource costs.

The final pillar says the United States must “build talent and capability,” after a year of the administration cutting a significant number of cyber positions in the federal government. “We will eliminate roadblocks that prevent industry, academia, government, and the military from aligning incentives and building a highly skilled cyber workforce,” it states.

Some positive reviews rolled in about the strategy despite the late-Friday afternoon release, traditionally the time of week when an administration looks to publish news it hopes will garner little attention.

“As new and more sophisticated threats emerge, America needed a new national cyber strategy that captures the urgency of this moment,” USTelecom President and CEO Jonathan Spalter said in a news release. “The President’s strategy rightly recognizes that harnessing America’s unique mix of private-sector innovation with public-sector capacity is the best deterrence.”

Frank Cilluffo, director of the McCrary Institute for Cyber and Critical Infrastructure Security at Auburn University, was struck by the focus on deterrence: “This unified strategy determining a direction on offensive and defensive cyber operations and collaboration couldn’t be more timely.”

The Business Software Alliance cheered the call for streamlining cyber regulations, in particular.

A number of cyber vendors took note of the passages on AI. “Redirecting resources from paperwork to AI-powered security capabilities is the only way to keep pace with modern threats and adversaries who operate at great speed,” said Bill Wright, global head of government affairs at Elastic. “This strategy appears to recognize that fundamental truth.”

Not all the reviews were flattering, however, including from the top Democrat on the House Homeland Security Committee, Bennie Thompson, who said the strategy’s “underachieving” was the only thing impressive about it.

“What little ‘substance’ does exist in this pamphlet is a mishmash of vague platitudes, a long catalogue of ‘we will’ statements that may or may not match the Administration’s current behavior, and, mercifully, an apparent extension of some Biden-era policies,” he said. “Completely lacking is even the most basic blueprint for how the Administration will go about achieving any of its cybersecurity goals — an objective possibly hamstrung by the hemorrhage in cyber talent across all Federal agencies since Trump took office.”

The executive order Trump signed Friday coincides with the release of the strategy, but there’s little overlap in subject matter; the strategy makes just one mention of cybercrime.

The order directs the attorney general to prioritize prosecution of cybercrime and fraud, orders agencies to review tools they could use to counter international criminal organizations and gives the Department of Homeland Security marching orders to improve training, in addition to other steps, according to a fact sheet.

“President Trump is unleashing every available tool to stop foreign-backed criminal networks that exploit vulnerable Americans through cyber-enabled fraud and extortion,” the fact sheet states.


CISA to host industry feedback sessions on cyber incident reporting regulation

The Cybersecurity and Infrastructure Security Agency will hold sector-by-sector town halls in the coming weeks to get feedback on a stalled regulation requiring critical infrastructure owners and operators to report when they suffer major cyberattacks.

The meetings, whose dates are set to be published in the Federal Register Friday, would “allow external stakeholders a limited additional opportunity to provide input on refining the scope and burden” of a proposed rule that CISA is advancing as part of the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) that Congress enacted in 2022.

That law requires critical infrastructure owners and operators to notify CISA within 72 hours when they are hit with a significant cyberattack and within 24 hours when they make a ransomware payment.

But defining which entities the law would cover, and how, has been a point of contention. The Trump administration pushed back its deadline to complete the rule last year, saying it would delay finalizing it until May.

Among the specific topics CISA wants comment on during the virtual town halls are proposed sector-based criteria for whom the regulations apply to; how to handle small businesses; how to consider chemical plants in light of a chemical plant security law lapsing; the list of example incidents that would meet the law’s reporting requirements; and how to reduce conflicts with existing regulations.

After the sector-by-sector meetings, CISA would hold general sessions on March 31 and April 2.

One industry source, granted anonymity to speak candidly, said they weren’t aware the additional sessions were coming until Thursday’s Federal Register notice and it “would have been nice” to know it was coming.

They also told CyberScoop they weren’t sure the town halls were what CIRCIA needed right now.

“Industry has already been very vocal about what we think needs to be addressed in the final rule,” the source said. “We want some back and forth, give and take to better understand what CISA may view as its limitations in implementing the rule.

“And to me, a town hall where you’re asking for more input isn’t what we need at this point. We want a dialogue,” they said.

Speaking to reporters at a conference last week about the timeline on CIRCIA releasing a final rule, CISA official Nick Andersen said that “I think that we’ll have some news on CIRCIA in pretty short order in the next couple of weeks, hopefully.” Andersen, executive assistant director for cybersecurity at the agency, said he couldn’t say more at the time on whether CISA would continue the existing rulemaking process or undertake a new one.


Acting CISA chief says DHS funding lapse would limit, halt some agency work

Another Department of Homeland Security shutdown would hamper the Cybersecurity and Infrastructure Security Agency’s ability to respond to threats, offer services, develop new capabilities and finish writing a key regulation, its acting director told Congress Wednesday.

Some of those activities would continue on a limited basis, while others would halt entirely, acting CISA leader Madhu Gottumukkala testified before the House Appropriations Subcommittee on Homeland Security.

“A lapse in funding would impede CISA’s ability to perform … good work,” he told the panel. “When the government shuts down, our adversaries do not.”

As lawmakers held the hearing, DHS was hurtling toward another potential shutdown as Democrats and Republicans clashed over Trump administration immigration policies and enforcement, with a focus most recently on the massive influx of DHS officers in Minneapolis, where those officers have killed multiple U.S. citizens.

Republicans said at the hearing the testimony should persuade Democrats to fund DHS, since its border operations are largely funded by last year’s budget reconciliation law and a shutdown would mainly harm DHS’s other agencies. Democrats said the hearing was “for show,” as they have put forward proposals to fund the rest of DHS as the immigration debate continues — and as 90% of DHS would continue operating under a shutdown, as the panel’s top Democrat, Henry Cuellar of Texas, asserted.

Gottumukkala said CISA planned to designate 888 of its 2,341 employees as “excepted,” meaning they could continue to work during a shutdown, albeit without pay.

“We will do everything we can to meet our mission during the shutdown,” he said. “Uncertainty and those missed paychecks are a serious hardship.”

CISA has reduced its personnel by a third under the second presidency of Donald Trump.

A shutdown “would delay deploying cybersecurity services and capabilities to federal agencies, leaving significant gaps in security programs,” Gottumukkala said in his written testimony. “CISA’s capacity to provide timely and actionable guidance to help partners defend their networks would be degraded.”

There’s a divide between activities CISA could continue in some capacity versus those it would have to shutter entirely during a funding lapse, he said.

“Limited activities include responding to imminent threats, sharing timely vulnerability and incident information, maintaining our 24/7 operations center, and operating cybersecurity shared services,” Gottumukkala said. “However, CISA would not perform any strategic planning, development of cybersecurity advice and guidance, or development of new technical capabilities.”

There would also likely be delays in activities such as issuing binding operational directives to federal agencies and completing the already-delayed regulations stemming from the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), which would require critical infrastructure operators to report major cyber incidents to CISA; that rulemaking would be paused during a shutdown, he said.

Gottumukkala’s testimony is the latest before Congress to focus on personnel at CISA. The chairman of the Appropriations subcommittee, Rep. Mark Amodei, R-Nev., chided Gottumukkala for what he said were delays in CISA providing a reorganization plan to the panel.

“We’ve been professional. We’ve been respectful,” Amodei said. “We expect exactly the same thing in return.”


Critics warn America’s ‘move fast’ AI strategy could cost it the global market

The Trump administration has made U.S. dominance in artificial intelligence a national priority, but some critics say a light-touch approach to regulating security and safety in U.S. models is making it harder to promote adoption in other countries.

White House officials have said since taking office that Trump intended to move away from predecessor Joe Biden’s emphasis on AI safety. Instead, they would allow U.S. companies to test and improve their models with minimal regulation, prioritizing speed and capability. 

But this has left other stakeholders, including U.S. businesses, to work out the rules of the road for themselves.

Camille Stewart Gloster, a former deputy national cyber director in the Biden administration, now owns and manages her own cyber and national security advisory firm. There are some companies, she said, who “recognize that security is performance.”

This means putting governance and security guardrails in place so the AI behaves as intended, access is tightly restricted, and inputs and outputs are monitored for unsafe or malicious activity that could create legal or regulatory risk.

“Unfortunately [there are] a small amount of organizations that realize it at a real, tangible ‘let’s put the money behind it’ level, and there are a number of small and medium organizations, and even some larger ones, that really just want to move fast and don’t quite understand how to strike that balance,” she said Monday at the State of the Net conference in Washington D.C.

Stewart Gloster said she has seen organizations inadvertently put users at risk by giving AI agents too much authority and too little oversight, leading to disastrous results. One company she advised was “effectively DDoSing their customers” with its AI agent, which was “flooding their customers with notifications to the point where they were upset, but they could not stop it, because cutting off the agent meant cutting off a critical capability.”
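A guardrail of the kind Stewart Gloster describes can be as simple as a per-customer rate limit that suppresses excess agent notifications rather than killing the agent outright. A hedged sketch, with hypothetical names and limits:

```python
import time
from collections import deque

class NotificationLimiter:
    """Cap agent-initiated notifications per customer per time window.

    Unlike an all-or-nothing kill switch, this suppresses only the excess
    sends, so the agent's 'critical capability' keeps running."""

    def __init__(self, max_per_window=5, window_seconds=3600.0):
        self.max_per_window = max_per_window
        self.window_seconds = window_seconds
        self.sent = {}  # customer_id -> deque of send timestamps

    def allow(self, customer_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.sent.setdefault(customer_id, deque())
        while q and now - q[0] > self.window_seconds:
            q.popleft()               # forget sends outside the window
        if len(q) >= self.max_per_window:
            return False              # suppress this send; don't kill the agent
        q.append(now)
        return True

limiter = NotificationLimiter(max_per_window=2, window_seconds=60)
print([limiter.allow("cust-1", now=t) for t in (0, 1, 2)])  # → [True, True, False]
```

The design choice is the point: because the cap degrades gracefully, operators aren’t forced to choose between flooding customers and cutting off the agent’s critical capability.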

The Trump administration and Republicans in Congress have made global AI leadership a top national priority. They argue that new regulations for the fast-growing AI industry would inhibit innovation and make U.S. tech companies less competitive. 

Some worry that the GOP’s zeal to boost U.S. AI companies may backfire. Michael Daniel, former White House cybersecurity coordinator during the Obama administration, said artificial intelligence regulation in the U.S. remains too weak for American models to gain broad adoption in other parts of the world, like Europe, where regulatory safety and security standards for commercial AI models are often higher.

“If we don’t take action here in the United States, we may find ourselves…being forced to play the follower, because not everybody will wait for us,” Daniel said. “And I would say that geopolitics are making that even less likely, and it’s making it more likely that others will move faster and more sharply than the U.S. will.”

One recent example: Elon Musk’s xAI is currently under investigation by multiple state and international regulators after its AI tool Grok generated millions of nonconsensual deepfake nudes, sexualized photos and child sexual abuse material from real users’ photos. Multiple countries have threatened to ban or restrict the use of X and Grok over the episode.

Musk himself has at times endorsed Grok’s propensity for making controversial or objectionable content, promoting features like “spicy mode” that make the model more offensive and vulgar, including by generating nude deepfakes from photos of real individuals.

AI researcher Emily Barnes noted that Grok’s Spicy Mode “sits squarely in a zone where intellectual property jurisprudence, platform governance and human rights frameworks have yet to align.”

“The result is a capability that can mass-produce non-consensual sexual images at scale without triggering consistent legal consequences” in the U.S., she wrote.

Daniel is part of a growing chorus of U.S. policymakers – mostly Democrats – who have argued over the past year that strong security and safety guardrails will help U.S.-made AI models compete on the world stage, not hurt them.

Last year, Sen. Mark Kelly, D-Ariz., urged that similar security and safety protections become a core part of how U.S. AI tools are built “not only to ensure the technology is safe for businesses and individuals to use and isn’t leveraged in widespread discrimination or scamming, but also because they can serve as a key differentiator between the U.S. and other competitors like China and Russia.”

“If we create the rules, maybe we can get our allies to work within the system that we have and we’ve created,” Kelly added. “I think we’ll have leverage there, I hope we do.”

Stewart Gloster said that in the absence of direction or regulation from the federal government, industry is finding that any rules of the road around ensuring security and reliability will have to come from companies looking to protect their own brand, partnering with other, smaller regulatory stakeholders.

“There are a lot of organizations that are contending with this new role that they must play as [the federal] government pushes down the responsibility of security to state government and as they look to industry to drive what innovation looks like,” she said.

While businesses are starting to have those conversations in trade associations and consortia to brainstorm alternatives, “this is not happening generally,” she said.

What’s more likely is that legal liability for AI developers, organizations and individuals around AI security and privacy failures will be shaped through lawsuits and the court system.

“That’s probably not the way we want it to happen, because bad facts make bad law, which means if it’s litigated in the courts, we’re likely to see a precedent that is very tailored to that set of facts, and that will be a really tough place for us to operate from,” she said.

The post Critics warn America’s ‘move fast’ AI strategy could cost it the global market appeared first on CyberScoop.

Sean Cairncross’ cybersecurity agenda: less regulation, more cooperation

The Trump administration needs help from industry to reduce the cybersecurity regulatory burden and to back important cyber legislation on Capitol Hill, among other areas, National Cyber Director Sean Cairncross said Tuesday.

“You know your regulatory scheme better than I do: Where there’s friction, where there’s frustration with information sharing, what sort of information is shared, the process through which it’s shared,” he said. “It is helpful for us to hear that and have that feedback so that we can address it, engage it and try to make it better.”

The Trump administration is interested in being a partner with industry rather than a “scold,” Cairncross said at an Information Technology Industry Council event. The Biden administration sought to impose more cybersecurity rules on the private sector than prior administrations.

Cairncross also called on industry to help renew the Cybersecurity Information Sharing Act of 2015, which has lapsed and been kept alive through short-term extensions in recent months as Congress stalls on what to do with a law that provides legal protections to companies that share cyber threat data with the government and each other.

The Trump administration would like to see the law extended as-is for 10 years.

“What we need from industry is an echo chamber up on the Hill to help make that happen,” he said. “I can go tell people how important this is, or the White House can weigh in, and we have done that. But when the people who are actually affected by this start to weigh in with members, that has an even greater impact.”

Overall, Cairncross wants industry to “show up and engage,” he said, as the administration has done with its forthcoming cybersecurity strategy, something he said would be rolled out “sooner rather than later.”

“Reach out to us,” he urged. “We will certainly be reaching out how we have gone about this strategic piece of this. Just from the outset, we have had a heavy industry engagement side of this and looked for feedback and thoughts. It’s been extremely helpful, and hopefully it has been successful in sending the message that we want to, which is, we are here to do everything we can to partner with industry.”

The post Sean Cairncross’ cybersecurity agenda: less regulation, more cooperation appeared first on CyberScoop.

OMB rescinds ‘burdensome’ Biden-era secure software memo

The Trump administration is rescinding a Biden-era memo that was intended to help agencies buy secure software, with the current Office of Management and Budget saying it relied on “unproven and burdensome” processes.

A former Biden administration official said the move is “the first major policy step back that I have seen in the administration on a cybersecurity front.”

At issue is the 2022 OMB memo titled “Enhancing the Security of the Software Supply Chain through Secure Software Development Practices,” M-22-18. The administration rescinded the memo Friday.

That memo led to the creation of a common “Secure Software Development Attestation Form” for government agencies that contractors had to use to vouch that their software adheres to a set of security practices. Agencies couldn’t buy from software vendors that couldn’t attest to the security of their products.

“Each agency head is ultimately responsible for assuring the security of software and hardware that is permitted to operate on the agency’s network,” OMB Director Russell Vought wrote in a brief memo Friday to agency heads. “There is no universal, one-size-fits-all method of achieving that result. Each agency should validate provider security utilizing secure development principles and based on a comprehensive risk assessment.”

Nick Leiserson, who served as assistant national cyber director for cyber policy and programs under Biden’s Office of the National Cyber Director, told CyberScoop that rescinding the 2022 memo was a step backward because the memo was meant to use government purchasing power to influence the market, and its repeal “is not good for the security of government systems and for the software that’s used throughout the whole U.S. economy.”

The memo stemmed from the Biden administration’s first cybersecurity executive order, a response to the major SolarWinds breach that saw agencies penetrated by alleged Russian hackers, among other notable cyber incidents.

Rescinding it leaves nothing in its place, said Leiserson, now senior vice president for policy at the Institute for Security and Technology, at a time of rising exploitation of software vulnerabilities.

Friday’s decision doesn’t ban everything from the 2022 memo. Vought said agencies could use the common attestation form if they choose; that agencies must “maintain a complete inventory of software and hardware and develop software and hardware assurance policies and processes that match their risk determinations and mission needs”; and that agencies could adopt contract terms requiring software makers to provide, upon request, a list of software ingredients known as a software bill of materials.

Leiserson disputed the idea that the 2022 memo was burdensome, based on government estimates that the common form would consume three hours and 20 minutes of paperwork. He also said rescinding it goes against the Trump administration’s goal of deconflicting a tangle of cybersecurity rules: in place of one common form for all contractors, agency-by-agency forms will increase the regulatory burden.

The Trump administration had previously signaled a desire to roll back other cybersecurity directions for agencies from President Joe Biden.

The post OMB rescinds ‘burdensome’ Biden-era secure software memo appeared first on CyberScoop.

Hill warning: Don’t put cyber offense before defense

Amid budding sentiment in the Trump administration and Congress to expand offensive cyber operations, some lawmakers and experts are warning that the United States needs to get its defenses in order before going too far down that road.

A House Homeland Security subcommittee on Tuesday examined how to deter foreign cyberattacks, with an emphasis on the role U.S. attacks could play in countering them. One long-running concern about improving U.S. offense is how it might provoke further attacks.

“I’m concerned we’re putting the cart before the horse, when we have not had a hearing on why the [Cybersecurity and Infrastructure Security] Agency has lost one-third of its workforce in the last year,” the top Democrat on the full committee, Bennie Thompson of Mississippi, said. “We ought to be cautious about pursuing an approach involving the use of offensive cyber tools that could result in retaliation or escalation if we’re not in a position to help defend U.S. networks.”

Other panel Democrats invoked a sentiment from sports about the importance of defense over offense. “Both are still important,” Rep. James Walkinshaw, D-Va., said during the hearing of the Cybersecurity and Infrastructure Protection Subcommittee.

Emily Harding with the Center for Strategic and International Studies, a D.C.-based think tank, testified that as the United States takes steps toward a more aggressive posture in cyberspace, it also needs to fund important defensive upgrades for federal government networks.

The chair of the subcommittee, Andy Ogles, R-Tenn., said that while defense was important, “defense alone is not sufficient,” and that “deterrence in cyberspace doesn’t exist without operational cyber offensive capabilities.”

The private sector could have a bigger role to play in boosting the country’s offense, since cybersecurity companies, tech providers and other businesses often have the best vantage point on attacks as both victims and investigators, Ogles said.

But much of the kind of things companies could do to bolster offense “exists in legal and policy gray space,” he said. “Companies face uncertainty about liability, retaliation and regulatory risk.”

A hybrid approach with private sector companies supporting government offensive operations rather than directly carrying them out generated the broadest support at the hearing. Harding said Congress could provide legal protections to companies in those circumstances.

CISA should play a key role in coordinating any public and private sector offensive activity, said Drew Bagley, chief privacy officer at CrowdStrike.

“This committee can ensure that CISA is properly focused and resourced to perform this mission,” he said in written remarks. “From an oversight perspective, you can ensure it has authorities, talent and capabilities to maximize its impact.”

The post Hill warning: Don’t put cyber offense before defense appeared first on CyberScoop.

FCC finalizes new penalties for robocall violators

The Federal Communications Commission finalized new financial penalties for telecoms that submit false, inaccurate or late reporting to a federal robocalling system.

The new regulations, which go into effect Feb. 5, will require providers to recertify every year that their information in the Robocall Mitigation Database (RMD) is accurate. They will also impose fines on offenders, including $10,000 for submitting false or inaccurate information and $1,000 for each entry not updated within 10 business days of receiving new information.
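As a rough back-of-the-envelope illustration of how the two fine categories described above combine (actual forfeitures are set by the FCC case by case; the function name and scenario here are purely hypothetical):

```python
# Illustrative sketch of the RMD penalty structure described in the article:
# $10,000 per false or inaccurate filing, $1,000 per entry not updated
# within 10 business days. Real forfeitures are determined by the FCC,
# not by this arithmetic.

def estimated_penalty(false_filings: int, stale_entries: int) -> int:
    """Return a rough combined penalty total under the new rules."""
    FALSE_FILING_FINE = 10_000  # per false or inaccurate submission
    STALE_ENTRY_FINE = 1_000    # per entry not updated within 10 business days
    return false_filings * FALSE_FILING_FINE + stale_entries * STALE_ENTRY_FINE

# e.g. a provider with two false filings and five stale entries
print(estimated_penalty(2, 5))  # 25000
```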

The commission also added two-factor authentication requirements for access to the database and directed its Wireline Competition Bureau to establish a new channel for reporting deficient filings.

Those deficiencies “range from failures to provide accurate contact information to submission of robocall mitigation plans that do not in any way describe reasonable robocall mitigation practices,” the FCC wrote in a final rule posted this week in the Federal Register.

The FCC already requires voice service providers to verify and certify the identities of their callers through the RMD. The database is designed to help regulators and law enforcement track and prevent call spoofing, a frequent tactic of illegal robocallers, and hold providers accountable for the identities of callers and phone numbers that use their networks.

But America’s telecommunications networks are vast and decentralized, comprised of both massive companies like Verizon and AT&T and smaller telecoms and voice-over-internet-protocol (VoIP) providers. Calls often hop from one provider network to another, and verification can get lost or overlooked in the chain of custody.

Historically, federal regulators neither verified nor enforced the accuracy of those filings. Their effectiveness was called into question two years ago, when a political consultant used a voice-cloning tool to impersonate then-President Joe Biden in fake voicemails to New Hampshire voters, spoofing the number of a prominent state Democratic ally. The carrier that transmitted those calls, Lingo Telecom, had nonetheless verified the caller’s identity at their highest level of confidence.

The FCC asked for public feedback on whether to treat violations as minor paperwork errors, which typically carry smaller fines, or as evidence of more serious misrepresentation or lack of candor on the part of the provider. Telecom trade associations opposed fines for false or inaccurate filings unless filers were first granted an opportunity to correct the error or the FCC found the information “willfully” inaccurate. State attorneys general and robocall surveillance platform ZipDX urged the FCC to take a stricter approach, arguing that the filing of false information “significantly undermines the Commission’s efforts to curb illegal robocalls.”

“The State AGs and ZipDX each express strong support for treating the filing of false or inaccurate information in the Robocall Mitigation Database akin to misrepresentation/lack of candor, arguing that such actions should elicit the statutory maximum penalty,” the commission wrote.

The FCC ultimately searched for a middle ground, concluding that a false filing in this case “warrants a significantly higher penalty than the existing $3,000 base forfeiture for failure to file required forms or information” but lower than the statutory maximum.

The post FCC finalizes new penalties for robocall violators appeared first on CyberScoop.

AI doesn’t care if it’s in California or Texas. It just runs.

Artificial intelligence is evolving faster than regulators can keep up. In the absence of federal guidance, states have taken matters into their own hands. California’s S.B. 53 is only one example of a state attempting to shape how AI is built and used. Although well-intentioned, and helpful in protecting consumers and promoting transparency on a small scale, these laws treat AI as if it were a purely local issue. In reality, AI is borderless, cloud-native, and woven through global infrastructure. It simply does not follow state lines.

In the 2025 legislative session, every state in the country, along with Puerto Rico, the Virgin Islands, and Washington, D.C., introduced proposals related to AI. This year alone, 38 states adopted or enacted roughly 100 measures. Yet these laws rely on different definitions and different compliance and enforcement approaches. The result is a patchwork regulatory landscape: as complex as the technology itself, but without the consistency and interoperability needed to govern AI effectively.

The accelerated expansion of state-level regulation highlights the problem’s growing urgency. It also points to a widening disconnect: AI is advancing rapidly, and new laws are proliferating, but coordination hasn’t kept pace. As a result, policy and security leaders are navigating a fast-paced regulatory landscape without a clear, unified direction.

The geographic fallacy of state-level AI laws

A fragmented regulatory scene creates real challenges for organizations that want to build or use AI responsibly. Each new state law introduces its own set of requirements for testing, reporting, documentation, or oversight. Security and risk teams then must map every workflow against all of the different (and sometimes conflicting) requirements. Even the basic definition of what counts as AI varies across states. A system that is regulated in one jurisdiction may be unregulated in another.

Large enterprises can usually keep up. With dedicated legal and compliance teams—and the budget to match—they can absorb the cost of audits, system changes, and frequent policy updates. Small and midsize companies don’t have this luxury. Early-stage AI innovators now face an unnecessary choice: devote limited resources to tracking and meeting dozens of regulatory obligations or slow development and risk falling behind. Even when well-intentioned, fragmentation becomes a gatekeeper—creating an environment where only the largest companies can operate at scale. This distorts the market by concentrating innovation in the most well-funded firms and making it harder for smaller teams to break through. The result is an uneven AI ecosystem shaped more by regulatory barriers than by technical capability.

The growing divide

The effects of widespread, conflicting regulations and expectations extend far beyond mere inconvenience. Fragmentation weakens security, reduces public trust, and increases risk across the full AI supply chain. When organizations must focus primarily on compliance, safety and ethics become secondary. Teams spend more time tracking state-level requirements than building the controls that matter most—creating potential gaps in oversight, testing, and transparency.

Regulatory inconsistencies also let large organizations gravitate toward jurisdictions with the most favorable rules. In practice, they can design their practices around minimum standards, rather than the strongest ones. Smaller companies cannot do this; to stay compliant, they often have to meet multiple sets of requirements at once. This uneven burden puts them at a disadvantage and creates a multi-track environment in which safety practices vary widely.

Organizations invite risk with inconsistent standards. In cybersecurity, fragmented controls are never effective. AI security is no different. Attackers exploit the weakest point. When rules vary widely, so do protections, which leaves openings for misuse, bias, faulty automation, and other cascading failures in interconnected systems. A world where AI safety depends on geography is not a world that advances trust.

The only sustainable path

A unified federal framework is required to establish clear expectations for transparency, accountability, and responsible innovation. AI operates across borders, and oversight must operate across borders as well.

The window for federal leadership is closing, and the economic consequences of inaction are becoming harder to ignore. As AI advances faster than state legislatures can respond, the patchwork of rules becomes more complex and more burdensome—especially for startups and smaller innovators who lack the resources to navigate it. Without swift national guidance, the U.S. risks hard coding a system where only the largest enterprises can afford to compete, stifling innovation long before consistent protections are ever put in place.

Advocacy organizations such as Build American AI play a valuable role in advancing this shift. Groups like this are rare, and they shouldn’t be. Clear federal guidance can support innovation while ensuring meaningful safeguards. Consistent national standards would reduce ambiguity, close regulatory loopholes, and give organizations a clear set of expectations that govern their work.

Such consistency benefits security teams, policymakers, and developers across the ecosystem. A unified approach enables organizations to invest in the protections that matter rather than diverting attention toward managing conflicting requirements. It encourages competition by allowing smaller companies to focus on innovation instead of compliance triage. It also raises the overall standard for safe AI development.

Transparency, governance, and a path forward

A more secure and consistent AI landscape begins with federal alignment. A single national framework, both efficient and flexible, would replace the state-level requirements that currently conflict with one another and delay AI development. It would prevent situations where an identical AI model faces one set of obligations in California and an entirely different set in Florida. With a unified baseline, organizations could invest in long-term safeguards rather than repeatedly adjusting to shifting geographic rules.

Internal governance plays an equally important role. An ethics-centered approach ensures that organizations build systems that are safe even when regulations are unclear or incomplete. This includes responsible data practices, model testing, and ongoing monitoring for issues such as bias drift or inaccurate outputs. A team designing an AI tool for patient intake, for example, needs a clearly defined process for detecting, documenting, and resolving errors. These internal controls strengthen both security and trust.

Transparency and interpretability round out the foundation for responsible AI. Systems that allow teams to understand how decisions are made make it easier to catch misuse or unintended behavior. A fraud detection model that shows which signals influence its decisions is easier to audit and fix than a “closed box” model that doesn’t. Organizations that adopt explainable and auditable tools early will be better prepared for future oversight and better equipped to respond when risks emerge.

Aligning oversight with the reality of AI

A unified federal approach to AI could provide benefits across the entire AI ecosystem. Innovation would expand because smaller organizations would no longer be hindered by conflicting state requirements. Security would improve because consistent expectations eliminate weak links and close opportunities for misuse. Trust would grow as transparent, interpretable systems become the norm rather than the exception.

AI does not recognize borders. Regulation should reflect that reality. Unified guidance does not slow the evolution of technology. It creates a stronger, safer, and more sustainable environment that supports responsible innovation for everyone.

Kevin Kirkwood is the chief information security officer at Exabeam.

The post AI doesn’t care if it’s in California or Texas. It just runs. appeared first on CyberScoop.

The Congressional remedy for Salt Typhoon? More information sharing with industry

When news broke approximately a year ago that Chinese hackers had systematically penetrated at least nine major U.S. communications networks, the level of alarm from policymakers was clear.

At a hearing held Tuesday by the Senate Committee on Commerce, experts offered differing assessments of the threat. While intelligence officials have characterized the Salt Typhoon operation’s targeting of high-level U.S. politicians as falling within the bounds of traditional geopolitical espionage, other experts argued that the unprecedented scale of China’s hacking activity in the U.S. telecom sector — and the country’s pursuit of broader, long-term access — constitutes a more systemic attack on critical infrastructure that poses a serious threat to national security.

Jamil Jaffer, executive director of the National Security Institute at George Mason University, noted before the committee that “the reality is that our adversaries don’t know where our red lines are” when it comes to intrusions like Salt Typhoon, because the U.S. has failed to effectively communicate its boundaries to adversary nations in cyberspace.

“They don’t know what we would do if those red lines are crossed, and to the extent that we do enforce them…in the cyber or telecommunications domain, we do it in a way that other adversaries can’t see,” said Jaffer.

Jaffer also criticized the U.S. government for both not doing enough to stop the attack ahead of time and relying too heavily on regulation to strengthen telecommunications cybersecurity. Instead, he advocated for closer voluntary cooperation and more information sharing between government and industry.

Senate Commerce Committee Chair Sen. Ted Cruz, R-Texas, and telecommunications subcommittee chair Sen. Deb Fischer, R-Neb., both endorsed the FCC’s recent decisions to withdraw a pair of new regulations issued by the agency in the waning days of the Biden administration. The first would have interpreted a decades-old law to say that telecoms have a legal obligation to protect their communications from unauthorized foreign interception. The second would have required telecoms to submit annual verification of their cybersecurity plans to the FCC.

FCC Chair Brendan Carr called those rules rushed and ineffective. He also said they were unnecessary, citing extensive conversations between the FCC and industry that had already produced voluntary cybersecurity improvements across the sector.

Cruz expressed support for the FCC’s decision, saying the rules would have forced telecoms to “chase the false security of compliance checklists instead of engaging in real-world threats” and divert resources from “the necessary partnerships and response capabilities that actually stop intrusions.”

“This [problem] needs foresight and agility, and it doesn’t come from imposing outdated checklists and top down regulations, it arises from a strong partnership between the private sector and government, working together to detect and deter attacks in real time,” said Cruz.

But that view was directly contradicted by a former FCC official at the hearing.

Debra Jordan, former chief of the commission’s Public Safety and Homeland Security Bureau, told lawmakers that the rules put out in January were an attempt by the FCC to “lean forward” and leverage flexible cyber standards rather than “sit back and wait for the next attack to happen.”

While Carr, Cruz and Fischer all cited increased cooperation with industry as sufficient, Jordan noted that the FCC does not cite any process by which providers are actually held accountable to meet specific commitments.

“From my experience as bureau chief, I’m not convinced that providers will take sufficient and sustained actions in the wake of Volt and Salt Typhoon without a strong verification regime,” she said.

Later, Sen. Maria Cantwell, D-Wash., noted that both AT&T and Verizon declined her request earlier this year for additional documentation detailing their response to the Salt Typhoon breach.

“Hardly a transparent effort,” Cantwell said. “I believe the American people deserve to know whether China is still in our telecom networks.”

Other FCC commissioners have also questioned the extent of the agency’s engagement with industry over Salt Typhoon. Last month, FCC Commissioner Anna Gomez told CyberScoop that she has not witnessed any robust discussions with telecom companies over the past year, adding that the only evidence she had of such conversations came from Carr’s statements.

She also lamented that the FCC’s withdrawal of telecom cybersecurity regulations would eliminate “the only meaningful regulatory response to Salt Typhoon that I’ve seen.”

Carr, Cruz and Fischer all touted existing laws and regulations requiring the removal and replacement of telecommunications equipment from Chinese companies like Huawei and ZTE as evidence the government has taken significant action to address the threat.

But Chinese telecommunications equipment does not appear to have played any role in Salt Typhoon’s intrusions, according to public officials who have said the hackers mostly relied on the poor state of cybersecurity across the telecom industry. Cantwell pointed out that the hackers gained access to telecom networks through basic weaknesses like unpatched vulnerabilities that have been public for years, weak passwords and lack of multifactor authentication.

Sen. Ben Ray Luján, D-N.M., was deeply critical of the FCC’s regulatory rollback. He noted that the Senate Commerce Committee held a hearing on Salt Typhoon’s intrusions last year and has done almost nothing since to secure telecom networks, while the FCC traded away its regulatory power for pinky promises from industry.

“The FCC stripped these protections away, replacing them with voluntary pledges and handshakes with companies whose networks have already proven themselves vulnerable to data breaches,” he said. “To put it plainly, these companies are basically leaving their front doors unlocked after a data break in, and the FCC has decided to take their word when they promise they’ve installed deadbolts and security cameras.”

Gomez, Jordan, Luján and Jaffer all described Salt Typhoon as an active threat to U.S. telecommunications networks and critical infrastructure, and expressed concern over how the vulnerabilities exploited by the group could be leveraged to disrupt or intercept vital U.S. emergency communications.

“We can see that it’s not just the major carriers,” said Luján. “I’m also concerned that schools, hospitals, libraries, police departments and emergency responders are all exposed and do not have the resources to defend themselves against foreign adversaries.”

The post The Congressional remedy for Salt Typhoon? More information sharing with industry appeared first on CyberScoop.

The quiet revolution: How regulation is forcing cybersecurity accountability

Cybersecurity coverage still fixates on headline-grabbing moments, whether it’s the latest breach, a zero-day exploit, or an eye-catching product launch. However, beneath the surface noise, a quieter but more profound transformation is taking place—driven by regulations that are changing the way organizations think about, approach, and communicate on security.

Across the globe, new standards and frameworks, including the EU’s Digital Operational Resilience Act (DORA) and the U.S. government’s Secure-by-Design Principles, as well as the Securities and Exchange Commission’s enhanced disclosure rules, are shifting accountability from aspiration to expectation. For security leaders, these are more than checkboxes. They’re the building blocks for a cultural revolution that rewards transparency, enforces architectural rigor, and reshapes how teams communicate risk from the SOC up to the C-suite.

Regulation as a cultural driver

For years, compliance was viewed as the bureaucratic, paperwork-heavy aspect of cybersecurity. It included an audit here, a checkbox there, and then it was back to business. Today’s frameworks are evolving to ask more complex questions. They no longer focus solely on whether basic security measures are in place, but challenge organizations to demonstrate deeper levels of readiness and accountability. For example, can you show that you have real-time awareness of what’s happening in your environment? Can you provide evidence that your systems were designed with security in mind and not with patches after vulnerabilities were discovered? And when a breach does occur, can you clearly and credibly explain how it was handled?

Statistics reinforce this shift. For example, law firm Greenberg Traurig reported in February 2025 that, since April 2024, 41 companies had disclosed cybersecurity incidents via Form 8-K in the U.S., with 15 of those filings made under the mandatory Item 1.05 (material incidents).

Taking a broader perspective, the average cost of a data breach has reached $4.88 million, a 10% year-over-year increase, according to DeepStrike, a company that provides penetration testing services. This illustrates that disclosure and accountability are rising in significance, and regulators are signaling that silent or slow responses are no longer acceptable.

This shift is less about bureaucracy and more about culture. It’s forcing teams to internalize accountability and to treat transparency, architecture, and communication as everyday disciplines rather than once-a-year compliance events.

From compliance to everyday behavior

Organizations that are successfully adapting to today’s evolving security landscape are embracing fundamental cultural shifts. One of the most significant changes is a growing emphasis on transparency. As breach disclosure rules and resilience mandates redefine incident response, the goal is credible communication rather than quiet containment.

Another key shift is the increasing role of architecture in driving security outcomes. The growing “secure by design” movement is making cybersecurity a core engineering principle. This means building systems that prioritize visibility, centralizing logs for better monitoring, and maintaining a comprehensive understanding of assets. These foundational practices are what separate resilient organizations from those that are vulnerable.

Equally important is the move toward greater cross-team accountability. Today’s regulatory environment demands multidisciplinary cooperation. Security cannot operate in isolation from compliance, engineering, or communications. In this approach, regulation forces legal, technical, and operational alignment.

Practical steps to get ahead

Rather than scrambling to satisfy every new rule, forward-looking leaders can use regulation as a blueprint for maturity. Three practical strategies stand out:

The first step is to build compliance into your design process. Start by including regulatory requirements in product plans and infrastructure from the outset; this is far cheaper and more effective than retrofitting security later. For example, set up centralized logging and encryption at the architecture stage and use security checklists during sprints. Involve legal teams early to clarify reporting obligations and avoid surprises. Treat compliance as an integral part of development, not just a final check.
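As an illustration of building logging in at the design stage, here is a minimal sketch in Python (standard library only; the service name "payments-api" is made up). It emits structured JSON log lines from the start, so a central collector can ingest them without later rework:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON line, ready for a central log pipeline."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "service": record.name,
            "message": record.getMessage(),
        })

def build_logger(service_name):
    # In production this handler would ship records to a central collector;
    # a plain stream handler stands in for it here.
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger(service_name)
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger

# "payments-api" is a hypothetical service name for illustration.
logger = build_logger("payments-api")
logger.info("service started")
```

Because every service logs through the same formatter from day one, logging coverage becomes an architectural property rather than a retrofit.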

Second, focus on security basics. Core areas like employee training, asset inventory, vulnerability management, and centralized logging are essential. Reliable asset inventories help track systems and ownership, while secure configurations and automated patching reduce risks. Tabletop exercises with leadership and legal teams build preparedness. Regulators increasingly expect these fundamentals to be in place and regularly tested.

Third, measure the metrics that truly matter. Instead of tallying alerts, track figures like Mean Time to Detect (MTTD), Mean Time to Disclose, secure configuration rates, logging coverage, and the speed of vulnerability response. Use these insights for board reporting and to demonstrate improving security maturity.
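The detection and disclosure metrics mentioned above fall out directly from incident timestamps. A minimal sketch, using made-up incident records:

```python
from datetime import datetime

# Hypothetical incident records for illustration: when the intrusion began,
# when it was detected, and when it was disclosed.
incidents = [
    {"occurred": "2025-01-02T08:00", "detected": "2025-01-02T14:00",
     "disclosed": "2025-01-04T14:00"},
    {"occurred": "2025-02-10T09:00", "detected": "2025-02-10T11:00",
     "disclosed": "2025-02-12T09:00"},
]

def _hours(start, end):
    """Elapsed hours between two ISO-style timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

def mean_time_to_detect(incidents):
    return sum(_hours(i["occurred"], i["detected"]) for i in incidents) / len(incidents)

def mean_time_to_disclose(incidents):
    return sum(_hours(i["detected"], i["disclosed"]) for i in incidents) / len(incidents)

print(f"Mean time to detect:   {mean_time_to_detect(incidents):.1f} h")
print(f"Mean time to disclose: {mean_time_to_disclose(incidents):.1f} h")
```

Trended quarter over quarter, numbers like these give a board a far clearer picture of maturity than raw alert counts.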

Beyond these steps, leaders should build a culture that prepares for failure by asking, “If we were breached tomorrow, what would fail?” This reverse-engineering mindset promotes proactive ownership and is a powerful cultural signal that accountability is everyone’s job.

Accountability becomes an advantage

What this quiet revolution yields is a new definition of maturity. Maturity no longer requires perfection, but it does require accountability. Organizations, their leaders, and their security teams will still face incidents. What is changing is the expectation for how they respond. In this culture, transparency and preparedness become competitive differentiators rather than risks.

As I’ve laid out, regulation is accelerating this shift. The most important story in cybersecurity today is not about the next breach, but how organizations respond and evolve in light of accountability. It’s a transformation of culture, and the leaders who embrace it will find themselves ahead of the curve.

Robert Rea is Chief Technology Officer at Graylog, where he leads product and engineering strategy. 

The post The quiet revolution: How regulation is forcing cybersecurity accountability appeared first on CyberScoop.

WEBCAST: GDPR – Spring Storm Warning

CJ Cox// Spring storms are often more dangerous and unpredictable than winter storms. The GDPR looks to be no exception. The General Data Protection Regulation is a universal law brought […]

The post WEBCAST: GDPR – Spring Storm Warning appeared first on Black Hills Information Security, Inc..
