Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Anthropic rolls out embedded security scanning for Claude 

By: djohnson
20 February 2026 at 16:40

Anthropic is rolling out a new security feature for Claude Code that can scan a user’s software codebases for vulnerabilities and suggest patching solutions.

The company announced Friday that Claude Code Security will initially be available to a limited number of enterprise and team customers for testing. That follows more than a year of stress-testing by the internal red teamers, competing in cybersecurity Capture the Flag contests and working with Pacific Northwest National Laboratory to refine the accuracy of the tool’s scanning features.

Large language models have shown increasing promise at both code generation and cybersecurity tasks over the past two years, speeding up the software development process but also lowering the technical bar required to create new websites, apps and other digital tools.

“We expect that a significant share of the world’s code will be scanned by AI in the near future, given how effective models have become at finding long-hidden bugs and security issues,” the company wrote in a blog post.

Those same capabilities also let bad actors scan a victim’s IT environment faster to find weaknesses they can exploit. Anthropic is betting that as  “vibe coding” becomes more widespread, the demand for automated vulnerability scanning will pass the need for manual security reviews.

As more people use AI to generate their software and applications, an embedded vulnerability scanner could potentially reduce the number of vulnerabilities that come with it. The goal is to reduce large chunks of the software security review process to a few clicks, with the user approving any patching or changes prior to deployment.

Anthropic claims that Claude Code Security “reads and reasons about your code the way a human researcher would,” showing an understanding of how different software components interact, tracing the flow of data and catching major bugs that can be missed with traditional forms of static analysis.

“Every finding goes through a multi-stage verification process before it reaches an analyst. Claude re-examines each result, attempting to prove or disprove its own findings and filter out false positives,” the company claimed. “Findings are also assigned severity ratings so teams can focus on the most important fixes first.”

Threat researchers have told CyberScoop that while the cybersecurity capabilities have clearly improved in recent years, they tend to be most effective at finding lower impact bugs, while experienced human operators are still needed in many organizations to manage the model and deal with higher-level threats and vulnerabilities.

But tools like Claude Opus and XBOW have shown the ability to unearth hundreds of software vulnerabilities, in some cases making the discovery and patching process exponentially faster than it was under a team of humans.

Anthropic said Claude Opus 4.6 is “notably better” at finding high-severity vulnerabilities than past models, in some cases identifying flaws that “had gone undetected for decades.”

Interested users can apply for access to the program. Anthropic clarifies on its sign up page that testers must agree to only use Claude Code Security on code their company owns and “holds all necessary rights to scan,” not third-party owned or licensed code or open source projects.

The post Anthropic rolls out embedded security scanning for Claude  appeared first on CyberScoop.

Critics warn America’s ‘move fast’ AI strategy could cost it the global market

By: djohnson
9 February 2026 at 19:33

The Trump administration has made U.S. dominance in artificial intelligence a national priority, but some critics say a light-touch approach to regulating security and safety in U.S. models is making it harder to promote adoption in other countries.

White House officials have said since taking office that Trump intended to move away from predecessor Joe Biden’s emphasis on AI safety. Instead, they would allow U.S. companies to test and improve their models with minimal regulation, prioritizing speed and capability. 

But this has left other stakeholders, including U.S. businesses, to work out the rules of the road for themselves.

Camille Stewart Gloster, a former deputy national cyber director in the Biden administration, now owns and manages her own cyber and national security advisory firm. There are some companies, she said, who “recognize that security is performance.”

This means putting governance and security guardrails in place so the AI behaves as intended, access is tightly restricted , and inputs and outputs are monitored for unsafe or malicious activity that could create legal or regulatory risk.

“Unfortunately [there are] a small amount of organizations that realize it at a real, tangible ‘let’s put the money behind it’ level, and there are a number of small and medium organizations, and even some larger ones, that really just want to move fast and don’t quite understand how to strike that balance,” she said Monday at the State of the Net conference in Washington D.C.

Stewart Gloster said she has seen organizations inadvertently put users at risk by giving AI agents too much authority and too little oversight, leading to disastrous results. One company she advised was “effectively DDoSing their customers” with their AI agent, who was “flooding their customers with notifications to the point where they were upset, but they could not stop it, because cutting off the agent meant cutting off a critical capability.”

The Trump administration and Republicans in Congress have made global AI leadership a top national priority. They argue that new regulations for the fast-growing AI industry would inhibit innovation and make U.S. tech companies less competitive. 

Some worry that the GOP’s zeal to boost U.S. AI companies may backfire. Michael Daniel, former White House Cybersecurity Coordinator during the Obama administration, said artificial intelligence regulations in the U.S. remain woefully inadequate to gain broad adoption in other parts of the world, like Europe, where regulatory safety and security standards for commercial AI models are often higher.

“If we don’t take action here in the United States, we may find ourselves…being forced to play the follower, because not everybody will wait for us,” said Daniel, “And I would say that geopolitics are making that even less likely, and it’s making it more likely that others will move faster and more sharply than the U.S. will.”

One recent example: Elon Musk’s xAI is currently under investigation by multiple regulators on the state and international level following the generation of millions of nonconsensual, deepfakes nudes, sexualized photos and Child Sexual Abuse Material of real user photos by its AI tool Grok. Multiple countries have threatened to ban or restrict the use of X and Grok in their countries over the episode.

Musk himself has at times endorsed Grok’s propensity for making controversial or objectionable content, promoting features like “spicy mode” that make the model more offensive and vulgar, including by generating nude deepfakes generated from photos of real individuals.

AI researcher Emily Barnes noted that Grok’s Spicy Mode “sits squarely in a zone where intellectual property jurisprudence, platform governance and human rights frameworks have yet to align.”

“The result is a capability that can mass-produce non-consensual sexual images at scale without triggering consistent legal consequences” in the U.S.,” she wrote.

Daniel is part of a growing chorus of U.S. policymakers – mostly Democrats – who have argued over the past year that strong security and safety guardrails will help U.S.-made AI models compete on the world stage, not hurt them.

Last year, Sen. Mark Kelly, D-Ariz., urged that similar security and safety protections become a core part of how U.S. AI tools are built “not only to ensure the technology is safe for businesses and individuals to use and isn’t leveraged in widespread discrimination or scamming, but also because they can serve as a key differentiator between the U.S. and other competitors like China and Russia.”

“If we create the rules, maybe we can get our allies to work within the system that we have and we’ve created,” Kelly added. “I think we’ll have leverage there, I hope we do.”Stewart Gloster said that in the absence of direction or regulation by the federal government, industry is finding that any rules of the road around ensuring security and reliability will have to come from companies looking to protect their own brand partnering with other, smaller regulatory stakeholders.

“There are a lot of organizations that are contending with this new role that they must play as [the federal] government pushes down the responsibility of security to state government and as they look to industry to drive what innovation looks like,” she said.

While businesses are starting to have those conversations in trade associations and consortia to brainstorm alternatives, “this is not happening generally.”  

What’s more likely is that legal liability for AI developers, organizations and individuals around AI security and privacy failures will be shaped through lawsuits and the court system.

“That’s probably not the way we want it to happen, because bad facts make bad law, which means if it’s litigated in the courts, we’re likely to see a precedent that is very tailored to that set of facts, and that will be a really tough place for us to operate from,” she said.

The post Critics warn America’s ‘move fast’ AI strategy could cost it the global market appeared first on CyberScoop.

Amazon rolls out AI bug bounty program 

By: djohnson
11 November 2025 at 15:12

Amazon became the latest company to open its large language models to outside security researchers, announcing the creation of a new bug bounty program for the tech giant’s AI tools.

The program will allow select third-party researchers and academic teams to prod NOVA, Amazon’s suite of foundational AI models and receive compensation for their findings. It will cover a range of common vulnerabilities that affect most generative AI systems: prompt injection, jailbreaking and vulnerabilities within the model that have “real-world exploitation potential.” Researchers will also look at how the models could be manipulated to assist in the production of chemical, biological, radiological and nuclear weapons.

“Security researchers are the ultimate real-world validators that our AI models and applications are holding up under creative scrutiny,” Hudson Thrift, CISO of Amazon Stores, said in a statement Tuesday.

Participants will be selected next year by Amazon through an invite-only system, meaning the company will still retain influence over which security researchers get access to their technology. According to the company, it has paid out more than $55,000 to researchers for 30 validated AI-related vulnerabilities under its broader public bug bounty program.

Amazon has bet big on generative AI, developing its own family of commercial large language models (NOVA) while also providing services like Amazon Bedrock that allow customers to access models from other companies like Anthropic, Mistral AI and others.

But as these products have become increasingly integrated within Amazon and user organizations, their safety and security has come with higher stakes and larger potential downstream effects.

“As Nova models power a growing ecosystem across Alexa, AWS customers through Amazon Bedrock, and other Amazon products, ensuring their security remains an essential focus,” Amazon wrote in the announcement, adding “By creating opportunities for hands-on learning and discovery, Amazon is helping raise a new generation of researchers equipped to secure the systems that will define the next era of AI.”

Earlier this year, Amazon held a tournament between 10 university research teams to find bugs and vulnerabilities in Amazon’s coding AI models. Each team received $250,000 and AWS credits upfront to conduct their work, while winners pulled in an additional $700,000 in reward money, according to the company.Their findings included novel bugs and methods for jailbreaking, safety alignment, data poisoning attacks and discovered a number of tradeoffs between security and functionality within Amazon’s NOVA models.

The post Amazon rolls out AI bug bounty program  appeared first on CyberScoop.

OpenAI releases ‘Aardvark’ security and patching model 

By: djohnson
30 October 2025 at 20:42

A new security-focused AI model released Thursday by OpenAI aims to automate bug hunting, patching and remediation.

The model, powered by ChatGPT-5 and given the name Aardvark, has been used internally at OpenAI and among external partners. Currently offered in an invite-only Beta, it’s designed to continuously scan source code repositories to find known vulnerabilities and bugs, assess and prioritize their potential severity, then patch and remediate them.

In a blog post published on the company’s website, OpenAI claims that Aardvark “does not rely on traditional program analysis techniques like fuzzing or software composition analysis.”

“Instead, it uses LLM-powered reasoning and tool-use to understand code behavior and identify vulnerabilities,” the blog stated. “Aardvark looks for bugs as a human security researcher might: by reading code, analyzing it, writing and running tests, using tools, and more.”

An illustration of how Aardvark, OpenAI’s new security model, works to identify, analyze and then remediate vulnerabilities. (Source: OpenAI)

OpenAI says Aardvark can also develop threat models based on the contents of a repository and project security goals and design, sandbox vulnerabilities to test their exploitability, annotate problematic code and submit proposed patches for human review.

In addition to finding security vulnerabilities, the company said Aardvark has shown the potential to spot logic and privacy bugs in code bases, and identified 92% of known and synthetically introduced vulnerabilities in unspecified “golden” repositories. Members of the open source community who operate noncommercial repositories will be able to use the scanner for free.

The company recently updated its coordinated vulnerability disclosure process in September, rolling out changes that include no longer committing to strict disclosure timelines, which OpenAI said can “pressure developers” and emphasizing broader ecosystem security. The Beta version of the model is currently open to select research partners, and OpenAI said it plans to broaden the tool’s use over time as it refines detection, validation and reporting capabilities.

“By catching vulnerabilities early, validating real-world exploitability, and offering clear fixes, Aardvark can strengthen security without slowing innovation,” the blog stated.

Aardvark’s release reflects OpenAI’s desire to leverage their technology for automated vulnerability scanning and remediation, a field where large language models have shown increasing promise and potential over the past year. The company said Aardvark has identified 10 vulnerabilities thus far that have received Common Vulnerabilities and Exposure (CVE)  entries.

Other companies, such as startup XBOW, have been able to develop AI security models over the past year that can ride to the top of bug bounty leaderboards at HackerOne and BugCrowd, run day and night and identify and fix hundreds of vulnerabilities.

XBOW founder Oege de Moor, who previously led GitHub Next, the company’s software research and development division, told CyberScoop in July that their model receives some human guidance on the front and manual validation on the backend, but otherwise runs autonomously during its bug hunting.

While vulnerability research experts have described models like XBOW as more useful for high-volume, low-impact bugs, the company has attempted to showcase the evolving model’s ability to tackle higher complexity bugs and exploits.

An automated program to address the thousands of low-severity bugs plaguing the internet, while freeing up human operators to tackle higher complexity vulnerabilities, would still have tremendous value. Some security experts point out that large cyber intrusions and multi-stage malware attacks are often less about exploiting zero days or high severity bugs and more about chaining together lower- and medium-impact flaws that exist in unpatched systems.

But another consideration around these models is the sheer energy they consume. De Moor said that while XBOW had solved thousands of bugs and received bug bounties and awards for its work, those earnings aren’t enough to cover the total compute costs to run XBOW over that time.

The post OpenAI releases ‘Aardvark’ security and patching model  appeared first on CyberScoop.

❌
❌