Reading view

There are new articles available, click to refresh the page.

The missing cybersecurity leader in small business

The average cyberattack costs for a small- or medium-size business is more than $250,000. The salary for a chief information security officer (CISO) is about the same, pulling in between $250,000 and $400,000, according to the annual 2026 CISO Report from Sophos and Cybersecurity Ventures. Small- and medium-size businesses (SMBs) know they cannot afford the salary, so they roll the dice, hoping they will not be attacked. This is a dangerous gamble that these businesses, which make up the backbone of the American economy, should not have to take. A virtual (vCISO) or fractional CISO (fCISO) can provide a practical solution.

As the American economy goes digital, SMBs now rely on the same building blocks as big enterprises — cloud services, payment systems, remote access, customer data, and other third-party vendors.  But without senior cyber leadership, cybersecurity often becomes a patchwork of tools, checklists, insurance paperwork, and whatever guidance a vendor offers. That may get these companies through a questionnaire; it will not build real resilience. Nearly half, all reported cyber incidents, which is projected to cost the global economy $12.2 trillion annually by 2031, involve smaller firms.

The threat is growing in both size and sophistication. Adversaries are deploying AI to automate reconnaissance, develop malware, and run phishing campaigns at scale.  This reduces the cost and skill needed to target smaller firms at volume. Adversaries are also collecting encrypted data with the intent to decrypt it later when they have access to large enough quantum computers. SMBs in defense, healthcare, and financial supply chains often hold sensitive credentials that provide access into larger enterprise environments, but most are not prepared to adopt quantum-resistant encryption.

SMBs generally understand they face cyber risk. The real gap is leadership: someone who can turn technical vulnerabilities into business decisions, set priorities, brief executives, prepare for audits, and hold vendors accountable. For more SMBs, hiring a full-time CISO is financially unrealistic.

A Virtual CISO provides remote, on-demand cybersecurity leadership and advice, typically supporting several organizations at the same time. A fractional CISO is a dedicated, part-time executive who is more deeply integrated into one organization’s governance, security planning, and day-to-day operations. Both models give smaller organizations access to senior-level cybersecurity expertise in a flexible, more affordable way than hiring a full-time CISO.

Washington should make it easier for SMBs to hire fractional cybersecurity leaders, because the private market is not closing this gap on its own. The Cybersecurity and Infrastructure Security Agency (CISA) and the Small Business Administration (SBA) could help by publishing buyer guidance: vetted criteria for evaluating providers, example scopes of work and deliverables, and real-world case studies that show SMB owners what a high-quality vCISO or fCISO engagement should look like.

Clear guidance matters because many smaller firms cannot easily tell the difference between true cybersecurity leadership and a tool reseller, compliance-only consultant, or a generic managed services contract. Any vetted provider criteria should emphasize proven experience building and running security programs, independence from vendor incentives and product quotas, and the ability to tie security investment to real business risk, not just a list of certifications. Model scopes of work should also spell out the basics every engagement should deliver: an initial risk assessment, a prioritized remediation roadmap, and simple metrics that show whether security is improving over time. Without clear buyer criteria, federal efforts could end up funding low-quality services that add cost and paperwork without making companies safer.

The National Institute for Standards and Technology (NIST) should recognize these CISO models in its SMB-focused Cybersecurity Framework guidance. That would help smaller firms turn the framework’s Govern, Identify, Protect, Detect, Respond, and Recover functions into a clear, accountable leadership structure. This would make these roles less abstract: the point is not merely providing advice, but taking executive-level ownership of risk priorities, vendor oversight, incident readiness, and communication with the owner or board.

Congress and the Treasury Department should consider targeted tax incentives or credits for qualified cybersecurity leadership services, tied to measurable risk-reduction outcomes. Eligible activities could include completing a risk assessment, building a incident response plan, conducting vendor security reviews, running employee training, and producing a remediation roadmap. SMBs often defer cybersecurity because every dollar competes with payroll, inventory, and growth. A targeted incentive would make security leadership easier to justify as a business investment rather than an optional add-on.

Federal acquisition officials should require contractors that handle sensitive government data to show it has executive-level cybersecurity oversight, whether it is full-time, virtual, or fractional, and should extend that expectation down to relevant subcontractors and suppliers. This is necessary because SMBs serve as entry points into defense, healthcare, financial, and critical infrastructure supply chains.

Finally, CISA and the SBA should support vCISO- and fractional-CISO-led workforce training. Employees improve security when training comes with leadership, regular reinforcement, and clear accountability, not just annual awareness training. The aim is not to turn every SMB into a Fortune 500 security shop. It should be to give smaller firms access to the leadership they need before the next incident forces the issue.

Georgianna Shea, who is a Doctor of Computer Science, is chief technologist at the Foundation for Defense of Democracies’ Center on Cyber and Technology Innovation and its Transformative Cyber Innovation Lab, where Cason Smith served as a summer 2025 intern. Cason is studying integrated information technology at the University of South Carolina.

The post The missing cybersecurity leader in small business appeared first on CyberScoop.

Space Force official touts AI’s impact on cyber compliance

Seth Whitworth, who is both acting Associate Deputy Chief of Space Operations for Cyber and Data and acting chief information security officer, said he believes AI tools are shifting the way defenders review cyber risk, both for individual systems and more holistically throughout an enterprise.  

In particular, Large Language Models can be used to systematically implement fixes for the smaller but critical weaknesses that have allowed state-sponsored hackers and cybercriminals to get inside victim networks and live off the land.

“Our adversaries are not looking for the massive cybersecurity vulnerabilities – we’re actually pretty good at [defending] that,” said Whitworth Tuesday at AI Talks, presented by Scoop News Group. “They’re looking for a misconfiguration, a failed update, a tiny little thing that allows them an entry point into a very connected network.”

Many of these basic cyber hygiene problems tend to fall under existing compliance programs, but it can take more than legal mandates to fix them. Many enterprise IT networks – particularly older ones – build up technical debt over time, leading to forgotten systems, hidden routers and other forms of shadow IT that get more insecure over time.

Cybersecurity experts say agents and the Large Language Models that power them – which operate in perpetuity 24/7, – are particularly well-suited to finding these smaller flaws and quickly exploiting them.

But Whitworth argued that the same technology can be used to reshape how organizations measure and track cyber compliance, from a sluggish box-checking exercise to something more nimble and substantive. He claimed that Space Force’s internal process for obtaining Authorities to Operate and other formal security certifications used to take 3-18 months. Now, it “can now be done in weeks and days.”

That in turn can empower program managers to “pull in all of that massive amount of data, allow the AI – who doesn’t get tired, who doesn’t miss patterns, who doesn’t miss these components – to churn on those items and them deliver something” that can inform real-time changes to cybersecurity, he said.

Whitworth also acknowledged the “fear” that many organizations still have around the use of AI, as well as lingering concerns about some of the technology’s enduring limitations like hallucinations and data poisoning. He said he still gives AI-generated outputs “extra scrutiny, because I haven’t seen the trusted validation” yet.

But he also said he gets more valuable insight on the Space Force’s holistic cyber risk from using Large Language Models than he does from other security control assessments, which tend to narrowly focus on the risk of single systems or assets in isolation.

“We are operating in a highly connected, highly orchestrated world, and so moderate risk that’s accepted in one program immediately becomes moderate risk that is accepted in another program,” said Whitworth. “AI can take that whole picture and understand that when this system change impacts this system, it also impacts this [other] system.”

The post Space Force official touts AI’s impact on cyber compliance appeared first on CyberScoop.

BrowserGate: Claims of LinkedIn ‘Spying’ Clash With Security Research Findings

Claims that “Microsoft is running one of the largest corporate espionage operations in modern history” face scrutiny as researchers analyze LinkedIn’s browser extension probing

The post BrowserGate: Claims of LinkedIn ‘Spying’ Clash With Security Research Findings appeared first on SecurityWeek.

Rapid7 Completes BSI C5 Type 2 Examination: Stronger Cloud Security for DACH Organizations

If you're a security leader operating in Germany, Austria, or Switzerland, you already know that compliance isn't a checkbox. It's a competitive differentiator. Rapid7 has completed BSI C5 Type 2 attestation for the Rapid7 Command Platform, including Threat Command, and it's a milestone worth unpacking.

This isn't just a badge on a webpage. It's proof that our security controls work, not just on paper, but in practice, over time.

What is BSI C5 and why does it matter?

The Cloud Computing Compliance Criteria Catalogue (C5) was developed by Germany's Federal Office for Information Security (BSI). It sets some of the most rigorous cloud security standards in the world, covering everything from data protection to operational transparency.

A Type 2 attestation is the gold standard within that framework. Unlike a point-in-time audit, Type 2 validates that security controls aren't just well-designed, but that they're actively working consistently over a sustained period. It's the difference between a security promise and a security proof.

For organizations in the DACH region, C5 is more than a nice-to-have. It's a procurement requirement for German federal agencies, critical infrastructure operators, healthcare institutions, and financial services firms. If you're operating in any of these sectors, your cloud providers need to meet this bar. Rapid7 now does.

BSI C5 Type 2 and your cloud security strategy

Whether you're evaluating security vendors, managing compliance obligations, or looking to strengthen your organization's risk posture, the question is the same: How do you know your cloud security provider actually does what it says?

BSI C5 Type 2 attestation answers that question. It's independent, rigorous, and sustained over time. While rooted in German regulatory requirements, C5 is increasingly recognized as a benchmark for secure cloud operations across Europe. It's one of the clearest signals that a cloud provider has the operational maturity to handle sensitive environments.

The Rapid7 Command Platform unifies exposure management with detection and response, giving security teams clear visibility across their attack surface. Threat Command extends that protection further, identifying and helping remediate threats across the clear, deep, and dark web. Both are now independently validated against one of the world's toughest cloud security frameworks.

Why independent validation of security controls matters

Trusting a security vendor shouldn't require a leap of faith. Independent validation exists so you have the evidence to make that call with confidence. This attestation reflects our continued investment in meeting the highest security standards for customers across Germany and the wider European market. Rapid7 has achieved a milestone that speaks directly to the conversations had every day with public sector and enterprise organizations who need more than a promise. 

They need proof that a security provider's controls have been tested, verified, and proven to hold up over time. That's the kind of assurance that matters when the stakes are high.

Ready to see the Command Platform in action? Visit Rapid7.com for a free trial.

If consequences matter, they should apply to vendors, too

Washington has rediscovered consequences. Just not consistently.

The March 6 executive order rests on a simple, correct idea: cyber-enabled fraud persists because it is profitable, scalable, and too often tolerated. So the government’s answer is to raise the cost. More coordination. More disruption. More prosecutions. More diplomatic pressure on the states that shelter these operations.

Good.

But weeks ago, an OMB Memo rescinded earlier federal software supply chain memos issued during the Biden administration. In practice, that pulled back from the prior attestation-centered model and made tools like the Secure Software Development Attestation Form and SBOM requests optional rather than durable expectations.

Put plainly, we are getting tougher on the people exploiting digital systems while getting softer on the conditions that make those systems so easy to exploit.

The executive order gets something important right. Cyber-enabled fraud is not a collection of random online annoyances. It is an industrialized form of predation: ransomware, phishing, impersonation, sextortion, and financial fraud that’s run as repeatable business models, often transnational and sometimes protected by permissive states. The order responds with a more centralized federal posture built around disruption, coordination, intelligence sharing, prosecution, resilience, and international pressure.

That is directionally correct. Criminal ecosystems do not retreat because we publish better guidance. They retreat when the cost of doing business rises.

But then we arrive at software.

The critique of the old federal assurance regime is not entirely wrong. Compliance can become theater. Bureaucracies are very good at turning legitimate security goals into rituals of form collection and checkbox management. Some skepticism was warranted. OMB says as much explicitly, arguing the prior model became burdensome and prioritized compliance over genuine security investment.

Still, the failure of bad compliance is not proof that accountability itself was the problem.

That is where the logic breaks. The administration is clearly willing to believe that criminal actors respond to deterrence. It is willing to use prosecutions, sanctions, visa restrictions, and coordinated pressure downstream. But upstream, where insecure technology shapes the terrain those criminals exploit, the theory suddenly changes. There, we are told to trust discretion. Local judgment. Flexible, risk-based decisions.

Sometimes that is wisdom. Often it is just a more elegant way of saying no one wants a hard requirement.

This is also why my own position has not changed. In a post I wrote in 2024, I argued that the industry did not need softer expectations or another round of polite encouragement. It needed more concrete action and consequences strong enough to change incentives. The problem was never that we were demanding too much accountability. The problem was that insecure software remained too cheap to ship.

That is the deeper issue. Cybercrime at scale does not thrive only because criminals exist. It thrives because the environment rewards them. Weak identity systems, brittle software, sprawling dependency chains, poor visibility, and diffuse accountability all make predation cheaper. The people who ship avoidable risk rarely absorb the full cost of it. Everyone else does.

So these two policy moves, taken together, reveal something uncomfortable. The government seems to believe in consequences for cybercriminals, but not quite in consequences for insecure production. It wants deterrence for the scammer, but discretion for the supplier.

A coherent cyber strategy would do both. It would aggressively disrupt criminal networks and also create meaningful pressure for secure-by-design production and procurement. It would recognize that punishing attackers matters, but so does changing the terrain that keeps making attack profitable.

The administration is right about one thing: cybercrime will not shrink until the costs of predation rise.

The unanswered question is why that logic should stop at the edge of the scam center.

Brian Fox is the co-founder and CTO of Sonatype.

The post If consequences matter, they should apply to vendors, too appeared first on CyberScoop.

AI doesn’t care if it’s in California or Texas. It just runs.

Artificial intelligence is evolving faster than regulators can keep up. In the absence of federal guidance, states have taken matters into their own hands. California’s S.B. 53 is only one example of a state attempting to shape how AI is built and used. Although these laws are well-intentioned and help protect consumers and promote transparency on a small scale, the problem is that these laws treat AI as if it were only a local issue. In the grand scheme, AI is borderless, cloud native, and woven through global infrastructure. It simply does not follow state lines.

In the 2025 legislative session, every state in the country, along with Puerto Rico, the Virgin Islands, and Washington, D.C., introduced proposals related to AI. This year alone, 38 states adopted or enacted roughly 100 measures. Yet these laws rely on different definitions and different compliance and enforcement approaches. The result is a patchwork regulatory landscape: as complex as the technology itself, but without the consistency and interoperability needed to govern AI effectively.

The accelerated expansion of state-level regulation highlights the problem’s growing urgency. It also points to a widening disconnect: AI is advancing rapidly, and new laws are proliferating, but coordination hasn’t kept pace. As a result, policy and security leaders are navigating a fast-paced regulatory landscape without a clear, unified direction.

The geographic fallacy of state-level AI laws

A fragmented regulatory scene creates real challenges for organizations that want to build or use AI responsibly. Each new state law introduces its own set of requirements for testing, reporting, documentation, or oversight. Security and risk teams then must map every workflow against all of the different (and sometimes conflicting) requirements. Even the basic definition of what counts as AI varies across states. The same system that may be regulated in one jurisdiction might be unregulated in another.

Large enterprises can usually keep up. With dedicated legal and compliance teams—and the budget to match—they can absorb the cost of audits, system changes, and frequent policy updates. Small and midsize companies don’t have this luxury. Early-stage AI innovators now face an unnecessary choice: devote limited resources to tracking and meeting dozens of regulatory obligations or slow development and risk falling behind. Even when well-intentioned, fragmentation becomes a gatekeeper—creating an environment where only the largest companies can operate at scale. This distorts the market by concentrating innovation in the most well-funded firms and making it harder for smaller teams to break through. The result is an uneven AI ecosystem shaped more by regulatory barriers than by technical capability.

The growing divide

The effects of widespread, conflicting regulations and expectations extend far beyond mere inconvenience. Fragmentation weakens security, reduces public trust, and increases risk across the full AI supply chain. When organizations must focus primarily on compliance, safety and ethics become secondary. Teams spend more time tracking state-level requirements than building the controls that matter most—creating potential gaps in oversight, testing, and transparency.

Regulatory inconsistencies also let large organizations gravitate toward jurisdictions with the most favorable rules. In practice, they can design their practices around minimum standards, rather than the strongest ones. Smaller companies cannot do this; to stay compliant, they often have to meet multiple sets of requirements at once. This uneven burden puts them at a disadvantage and creates a multi-track environment in which safety practices vary widely.

Organizations invite risk with inconsistent standards. In cybersecurity, fragmented controls are never effective. AI security is no different. Attackers exploit the weakest point. When rules vary widely, so do protections, which leaves openings for misuse, bias, faulty automation, and other cascading failures in interconnected systems. A world where AI safety depends on geography is not a world that advances trust.

The only sustainable path

A unified federal framework is required to establish clear expectations for transparency, accountability, and responsible innovation. AI operates across borders, and oversight must operate across borders as well.

The window for federal leadership is closing, and the economic consequences of inaction are becoming harder to ignore. As AI advances faster than state legislatures can respond, the patchwork of rules becomes more complex and more burdensome—especially for startups and smaller innovators who lack the resources to navigate it. Without swift national guidance, the U.S. risks hard coding a system where only the largest enterprises can afford to compete, stifling innovation long before consistent protections are ever put in place.

Advocacy organizations such as Build American AI play a valuable role in advancing this shift. Groups like this are rare, and they shouldn’t be. Clear federal guidance can support innovation while ensuring meaningful safeguards. Consistent national standards would reduce ambiguity, close regulatory loopholes, and give organizations a clear set of expectations that govern their work.

Such consistency benefits security teams, policymakers, and developers across the ecosystem. A unified approach enables organizations to invest in the protections that matter rather than diverting attention toward managing conflicting requirements. It encourages competition by allowing smaller companies to focus on innovation instead of compliance triage. It also raises the overall standard for safe AI development.

Transparency, governance, and a path forward

A more secure and consistent AI landscape begins with federal alignment. A single national framework that is capable of efficiency and flexibility would replace the state-level requirements that currently conflict and delay AI development. This would prevent situations where an identical AI model faces one set of obligations in California and an entirely different set in Florida. With a unified baseline, organizations could invest in long term safeguards rather than repeatedly adjusting to shifting geographic rules.

Internal governance plays an equally important role. An ethics-centered approach ensures that organizations are building systems that are safe even when regulations are unclear or incomplete. This includes responsible data practices, model testing, and ongoing issues such as bias drift or inaccurate outputs. A team designing an AI tool for patient intake, for example, needs a clearly defined process for detecting, documenting, and resolving errors. These internal controls strengthen both security and trust.

Transparency and interpretability round out the foundation for responsible AI. Systems that allow teams to understand how decisions are made, make it easier to catch misuse or unintended behavior. A fraud detection model that shows which signals influence its decisions is easier to audit and fix than a “closed box” model that doesn’t. Organizations that are early adopters of explainable and auditable tools will be better prepared for future oversight and better equipped to respond when risks emerge.

Aligning oversight with the reality of AI

A unified federal approach to AI could provide benefits across the entire AI ecosystem. Innovation can expand because smaller organizations are no longer hindered by conflicting state requirements. Security would improve because consistent expectations eliminate weak links and close opportunities for misuse. Trust will grow as transparent interpretable systems become the norm rather than the exception.

AI does not recognize borders. Regulation should reflect that reality. Unified guidance does not slow the evolution of technology. It creates a stronger, safer, and more sustainable environment that supports responsible innovation for everyone.

Kevin Kirkwood is the chief information security officer at Exabeam.

The post AI doesn’t care if it’s in California or Texas. It just runs. appeared first on CyberScoop.

How to determine if agentic AI browsers are safe enough for your enterprise

Agentic AI browsers like OpenAI’s Atlas have debuted to major fanfare, and the enthusiasm is warranted. These tools automate web browsing to close the gap between what you want to accomplish and getting it done. Rather than manually opening multiple tabs, you can simply tell the browser what you need. Ask it to file a competitor brief, filling out a form, or schedule a meeting, and it will handle the task while you watch.

But with this evolution comes a stark reality: agentic browsers expand the enterprise attack surface in unprecedented ways. As the web shifts from something we browse to something that acts on our behalf, the stakes get higher. Agentic AI browsers are no longer passive tools. They take initiative, operate on our behalf, and in some cases, act with administrative privilege. That represents a seismic shift in trust and risk.

The browsing revolution: From reader to actor

Agentic AI is an execution model. It interprets a user’s intent, plans a series of actions, and executes them autonomously across websites. Over the past few months, I’ve tested several agentic browsers (Atlas, Comet, Dia, Surf, and Fellou) extensively and conducted limited testing with others (Neon and Genspark).

Each browser represents a distinct approach to the same fundamental challenge: how to eliminate constant tab-switching and let users complete tasks in one place. Atlas, built on ChatGPT, emphasizes supervised actions within a browsing sandbox. Comet prioritizes “research velocity,” using coordinating agents across multiple tabs to gather information faster. Neon offers a comprehensive browser automation experience with the option to run it on your own machine. Genspark and Fellou are designed to take more actions with less human oversight.

Yet as these tools grow more capable, they grow correspondingly more dangerous.

The hidden security threats

Conventional browser security measures, like TLS encryption and endpoint protection, weren’t designed to handle the risk that AI agents create. These tools introduce several significant new attack vectors. These include:

Indirect Prompt Injection: Malicious instructions can be embedded in websites in ways invisible to the user. The agent, tasked with interpreting and acting on content, may misinterpret these cues as legitimate directives. Imagine a rogue blog post containing hidden HTML that causes your agent to email internal documents to an attacker. If the browser agent treats that action as part of the task flow, damage can be done before any human intervenes.

Clipboard and Credential Artifacts: Some agents interact with your clipboard or browser session to perform actions. If the agent can access sensitive tokens or passwords, particularly without clear logs or approval workflows, an attacker could manipulate this access through crafted web content.

Opaque Execution Flows: Many of these browsers operate with black-box agents. Without fine-grained logs, rollback capabilities, or sandboxing, users often remain unaware of what the agent is doing in the background until it’s too late. Comet, for instance, offers impressive speed but has demonstrated vulnerabilities to prompt injection and credential misuse.

Over-Privileged Automation: It’s tempting to let the AI agent access everything, especially when tasks involve multiple sites, accounts, and tools. But granting such control without granular permissions or approval checkpoints opens the door to lateral movement attacks—where a compromised agent becomes a gateway to your broader systems.

Without clear guardrails like scoped permissions, transparent logs, and sandboxing, these tools can unintentionally execute malicious or unauthorized actions on behalf of the user.

Governance isn’t optional

Enterprise buyers must stop thinking of governance as a secondary concern. The most secure tools are those that limit what agents can do.

Atlas, for example, confines actions to a supervised mode (“Watch Mode”) for sensitive sites, requiring active oversight before anything consequential happens. Neon executes actions locally in the user’s session, avoiding the transfer of credentials to a cloud agent. Surf (now open source) and Dia (recently acquired by Atlassian) don’t let agents take actions independently, limiting the attack surface.

Genspark and Fellou, on the other hand, promise sweeping autonomy. Their security profiles reflect that ambition, with user reviews calling out instability, unverifiable claims, and the need for sandboxed, staged rollouts.

Practical advice for enterprise leaders

For enterprises interested in these new browsers but concerned about security, the answer is simple: start narrow. Begin with a few, well-defined workflows rather than deploying agents across the organization. Choose three specific tasks, like drafting a competitor brief, reviewing  vendor RFPs, or arranging travel. Then track key metrics: speed of completion, frequency of mistakes, and quality of results.

Next, apply enterprise-grade controls. These include:

  • Requiring approval for each action when the agent sends messages, emails, or makes purchases.
  • Using role-based access to limit what agents can touch.
  • Keeping critical systems (e.g., HRIS, financial tools, source code repositories) completely out of scope.
  • Insisting on transparent logs that record each action taken by the agent and the input that triggered it.

It’s equally critical to train your users. Even basic training on how to write good prompts makes a big difference. Help teams understand how agents interpret language, how prompt injection works, and how to spot suspicious outputs.

Most importantly, don’t bet everything on one browser. Instead, choose an agent that operates with more independence (like Comet or Atlas) for low-risk workflows, and pair it with a more guided tool (like Dia) for employees who need support but not full automation.

A measured optimism

Despite the risks, I remain optimistic. The shift to agentic browsing is fundamentally reshaping how we work. Applied correctly and judiciously, these tools will save time, reduce friction, and help users unlock insights faster than ever before.

But we cannot afford to conflate novelty and safety. The burden is on vendors to bake in controls, not bolt them on, and on enterprises to pilot thoughtfully, not plunge ahead. We’ve seen this pattern previously with browser extensions, mobile apps, and cloud-first tools. Those who approached with healthy skepticism and robust guardrails were the ones who reaped the benefits without the breaches. Agentic AI will be no different.

Shanti Greene is head of data science and AI innovation at AnswerRocket.

The post How to determine if agentic AI browsers are safe enough for your enterprise appeared first on CyberScoop.

The quiet revolution: How regulation is forcing cybersecurity accountability

Cybersecurity headlines still focus on the headline-grabbing moments, whether it’s the latest breach, a zero-day exploit, or an eye-catching product launch. However, beneath the surface noise, a quieter but more profound transformation is taking place—driven by regulations that are changing the way organizations think about, approach, and communicate on security.”

Across the globe, new standards and frameworks, including the EU’s Digital Operational Resilience Act (DORA) and the U.S. government’s Secure-by-Design Principles, as well as the Securities and Exchange Commission’s enhanced disclosure rules, are shifting accountability from aspiration to expectation. For security leaders, these are more than checkboxes. They’re the building blocks for a cultural revolution that rewards transparency, enforces architectural rigor, and reshapes how teams communicate risk from the SOC up to the C-suite.

Regulation as a cultural driver

For years, compliance was viewed as the bureaucratic, paperwork-heavy aspect of cybersecurity. It included an audit here, a checkbox there, and then it was back to business. Today’s frameworks are evolving to ask more complex questions. They no longer focus solely on whether basic security measures are in place, but challenge organizations to demonstrate deeper levels of readiness and accountability. For example, can you show that you have real-time awareness of what’s happening in your environment? Can you provide evidence that your systems were designed with security in mind and not with patches after vulnerabilities were discovered? And when a breach does occur, can you clearly and credibly explain how it was handled?

Statistics reinforce this shift. For example, law firm Greenberg Traurig published in February 2025 that, since April 2024, 41 companies have disclosed cybersecurity incidents via Form 8-K in the U.S., with 15 of those filings under the mandatory Item 1.05 (material incidents). 

Taking a broader perspective, the average cost of a data breach has reached $4.88 million, a 10% year-over-year increase, according to DeepStrike, a company that provides penetration testing services. This illustrates that disclosure and accountability are rising in significance, and regulators are signaling that silent or slow responses are no longer acceptable.

This shift is less about bureaucracy and more about culture. It’s forcing teams to internalize accountability and to treat transparency, architecture, and communication as everyday disciplines rather than once-a-year compliance events.

From compliance to everyday behavior

Organizations that are successfully adapting to today’s evolving security landscape are embracing fundamental cultural shifts. One of the most significant changes is a growing emphasis on transparency. As breach disclosure rules and resilience mandates redefine incident response, the goal is credible communication versus quiet containment. 

Another key shift is the increasing role of architecture in driving security outcomes. The growing “secure by design” movement is making cybersecurity a core engineering principle. This means building systems that prioritize visibility, centralizing logs for better monitoring, and maintaining a comprehensive understanding of assets. These foundational practices are what separate resilient organizations from those that are vulnerable.

Equally important is the move toward greater cross-team accountability. Today’s regulatory environment demands multidisciplinary cooperation. Security cannot operate in isolation from compliance, engineering, or communications. In this approach, regulation forces legal, technical, and operational alignment.

Practical steps to get ahead

Rather than scrambling to satisfy every new rule, forward-looking leaders can use regulation as a blueprint for maturity. These are three practical strategies:

The first step is to build compliance into your design process. Start by including regulatory requirements in product plans and infrastructure from the outset—this is far cheaper and more effective than retrofitting. For example, set up centralized logging and encryption at the architecture stage and use security checklists during sprints. Involve legal teams early to clarify reporting obligations, avoiding surprises later. Treat compliance as an integral part of development, not just a final check.

Next, focus on security basics. Core areas like employee training, asset inventory, vulnerability management, and centralized logging are essential. Reliable asset inventories help track systems and ownership, while secure configurations and automated patching reduce risks. Tabletop exercises with leadership and legal teams build preparedness. Regulators increasingly expect these fundamentals to be in place and regularly tested.

Finally, measure metrics that truly matter. Instead of tallying alerts, track things like Mean Time to Detect (MTTD), Mean Time to Disclose (MTTD), secure configuration rates, logging coverage, and the speed of vulnerability response. Use these insights for board reporting and to demonstrate improving security maturity.

Finally, leaders should build a culture that prepares for failure by asking, “If we were breached tomorrow, what would fail?” This reverse-engineering mindset promotes proactive ownership and is a powerful cultural signal that accountability is everyone’s job.

Accountability becomes an advantage

What this quiet revolution yields is a new definition of maturity. This does not require perfection, but accountability. Organizations, their leaders, and their security teams will still face incidents. However, what is changing is the expectation of a response. In this culture, transparency and preparedness become competitive differentiators rather than risks.

As I’ve laid out, regulation is accelerating this shift. The most important story in cybersecurity today is not about the next breach, but how organizations respond and evolve in light of accountability. It’s a transformation of culture, and the leaders who embrace it will find themselves ahead of the curve.

Robert Rea is Chief Technology Officer at Graylog, where he leads product and engineering strategy. 

The post The quiet revolution: How regulation is forcing cybersecurity accountability appeared first on CyberScoop.

GRC for Security Managers: From Checklists to Influence

This webcast was originally aired on January 16, 2025. In this video, Kelli K. Tarala and CJ Cox discuss the challenges and strategies for improving governance, risk, and compliance (GRC) […]

The post GRC for Security Managers: From Checklists to Influence appeared first on Black Hills Information Security, Inc..

Cyber Risk Lessons We Can Learn From Hurricane Preparedness

Risk is real. To better understand cybersecurity risk, let’s compare cyber risks to risks in the natural world from hurricanes. We can learn lessons from hurricanes and unnamed storms in […]

The post Cyber Risk Lessons We Can Learn From Hurricane Preparedness appeared first on Black Hills Information Security, Inc..

Why Do Car Dealers Need Cybersecurity Services? 

Tom Smith // At Black Hills Information Security (BHIS), we deal with all manner of clients, public and private. Until a month or two ago, though, we’d never dealt with […]

The post Why Do Car Dealers Need Cybersecurity Services?  appeared first on Black Hills Information Security, Inc..

❌