
DOJ, Georgia Tech affiliate company settle over alleged failure to meet DOD contract cyber requirements

30 September 2025 at 14:01

A company affiliated with the Georgia Institute of Technology agreed to pay $875,000 to the U.S. government to settle a case involving allegations that it knowingly failed to meet cybersecurity requirements for obtaining Pentagon contracts, the Justice Department announced Tuesday.

Two Georgia Tech whistleblowers who worked on the university’s cybersecurity team first filed suit in 2022 under the False Claims Act, a Civil War-era law aimed at combating shady contractors. The Justice Department joined the suit two years later on behalf of the Defense Department, Air Force and Defense Advanced Research Projects Agency.

The settlement resolves the suit against Georgia Tech and Georgia Tech Research Corporation over allegations that they failed to install antivirus tools at Georgia Tech’s Astrolavos Lab while it conducted sensitive cyber-defense research for the Pentagon. The Justice Department also had said that Georgia Tech and the affiliate company submitted a false cybersecurity assessment score to the Defense Department.

“When contractors fail to follow the required cybersecurity standards in their DoD contracts, they leave sensitive government information vulnerable to malicious actors and cyber threats,” said Brett Shumate, assistant attorney general of the Justice Department’s Civil Division.

Under the settlement agreement, neither side concedes to the other over the allegations.

“From the outset, Georgia Tech denied the government’s allegations that mischaracterized our commitment to cybersecurity,” said a spokesperson for the university, Blair Meeks. “We worked hard to educate the government about the strong compliance efforts of our researchers and are pleased to avoid the distraction of litigation by resolving this matter without any admission of liability. Georgia Tech looks forward to continued collaboration with the Department of Defense and other federal partners in conducting ground-breaking research in a secure manner.”

The two sides first reached a tentative settlement agreement in May. The government will pay the two whistleblowers, Kyle Koza and Christopher Craig, $201,250 out of the settlement.

The Justice Department began using the False Claims Act in 2022 to punish contractors over cybersecurity shortcomings under its Civil Cyber-Fraud Initiative. It has since settled with a number of parties in those cases, including for $9 million with Aerojet Rocketdyne, $8.4 million with Raytheon and Nightwing, $4.6 million with MORSECORP and $4 million with Verizon Business Network Services.

The post DOJ, Georgia Tech affiliate company settle over alleged failure to meet DOD contract cyber requirements appeared first on CyberScoop.

Workado settles with FTC over allegations it inflated its AI detectors’ capabilities 

By: djohnson
29 August 2025 at 13:16

The Federal Trade Commission thinks AI detectors might be BS.

The agency announced a consent order this week with Workado, an Arizona-based company that makes an AI content detector tool. The order forces the company to retract its public claims about the tool’s effectiveness and to notify its customers.

The settlement follows an investigation by the FTC this past year into Workado’s public claims that its AI content detector could determine with near-perfect accuracy whether a piece of text was generated by popular commercial large language models from OpenAI, Anthropic, Google and others.

That included claims that the detector “is one of the most trusted and goes deeper than a generic AI detector.” The company claimed it could accurately detect AI-generated content 98% of the time, while at the same time offering a pro version of the software that it said could “transform AI text into undetectable AI content.”

But according to an FTC complaint in April, Workado “did not build, train or finetune” the actual AI model behind its product, which was pulled from Hugging Face, an open-source and publicly available AI repository.

That model was only trained on academic content — not Wikipedia, blogs and other sources — and limited to ChatGPT, excluding other commercial models. The developers’ testing data “also showed that the AI Model struggled to identify AI-generated content as AI-generated when evaluating nonacademic content, correctly detecting AI-generated text merely 53.2% of the time,” not 98% as Workado claimed.

“Consumers trusted Workado’s AI Content Detector to help them decipher whether AI was behind a piece of writing, but the product did no better than a coin toss,” said Chris Mufarrige, Director of the FTC’s Bureau of Consumer Protection, in April. 

The FTC settlement specifies that Workado “must not make any representation expressly or by implication” about the effectiveness of its product at detecting AI-generated or altered content “unless the representation is non-misleading.”

In order to do that, Workado must ensure that “at the time such representation is first made, and each time such representation is made thereafter, they possess and rely upon competent and reliable evidence, which when appropriate based on the expertise of professionals in the relevant area must be competent and reliable scientific evidence, that is sufficient in quality, quantity, and timeliness based on standards generally accepted in the relevant fields when considered in light of the entire body of relevant and reliable evidence, to substantiate that the representation is true.”

Essentially, that means every time Workado publicly claims its software can spot signs of AI manipulation, it must repeat its testing process and update the software to keep pace with newer models. As part of the order, the company is required to securely store all test data and related documentation for future review and submit to ongoing government compliance monitoring.

Workado, which did not formally acknowledge wrongdoing as part of the order, must also contact its customers using an FTC-drafted letter to acknowledge it settled charges of false or unsubstantiated advertising claims about the accuracy of its AI content detector.

“We claimed that our AI Content Detector will predict with a 98% accuracy rate whether text was created by AI content generators like ChatGPT, GPT4, Claude, and Bard,” the draft letter states. “The FTC says we didn’t have proof to back up those claims. We’ve stopped making those claims. In the future, we won’t make claims about the accuracy of our AI content detection tools unless we can prove them.”

To be clear, designing a program that can reliably detect AI-generated content over long periods of time is a challenging, but legitimate, field. Because both deepfakes and deepfake detectors are built on the same underlying LLM technology, their algorithms can learn from each other’s innovations, and models can be trained to more effectively find (or evade) each other. This creates a perpetual cat-and-mouse game, where the effectiveness of AI detectors gradually degrades over time unless they’re updated.

Researchers at the Defense Advanced Research Projects Agency (DARPA) have been aware of this problem for years and have worked to design systems that can accurately identify AI-manipulated content in text, video and audio. They have also designed these systems to be adaptable, allowing them to evolve as AI technology advances. While there is a clear need for forensic tools to analyze media for synthetic content, creating solutions that can consistently remain effective will always be a moving target.

But the investigation and settlement with Workado demonstrate that the FTC understands the fluidity of the science behind AI detection, and believes the bar for companies to claim their tools work as intended is high, requiring constant, science-backed vigilance to remain true over time.


The overlooked changes that two Trump executive orders could bring to cybersecurity

13 August 2025 at 15:04

Two executive orders President Donald Trump has signed in recent months could prove to have a more dramatic impact on cybersecurity than first thought, for better or for worse.

Overall, some of Trump’s executive orders have been more about sending a message than spurring lasting change, as there are limits to their powers. Specifically, some provisions of the two executive orders with cyber ramifications — one from March on state and local preparedness generally, and one from June explicitly on cybersecurity — puzzle cyber experts more than anything else, while others preserve policies of the prior administration that Trump has criticized in harsh terms. Still others might, in practice, fall short of the orders’ intentions.

But amid the flurry of personnel changes, budget cuts and other executive branch activity in the first half of 2025 under Trump, the full scope of the two cyber-related executive orders might have been somewhat overlooked. And the effects of some of those orders could soon begin coming to fruition as key top Trump cyber officials assume their posts.

The Foundation for Defense of Democracies’ Mark Montgomery said the executive orders were “more important” than he originally understood, noting that he “underestimated” the March order after examining it more closely. Some of the steps would be positive if fully implemented, such as the preparedness order’s call for the creation of a national resilience strategy, he said.

The Center for Democracy & Technology said the June order, which would unravel some elements of executive orders under presidents Joe Biden and Barack Obama, would have a negative effect on cybersecurity.

“Rolling back numerous provisions focused on improving cybersecurity and identity verification in the name of preventing fraud, waste, and abuse is like claiming we need safer roads while removing guardrails from bridges,” said the group’s president, Alexandra Reeve Givens. “The only beneficiaries of this step backward are hackers who want to break into federal systems, fraudsters who want to steal taxpayer money from insecure services, and legacy vendors who want to maintain lucrative contracts without implementing modern security protections.”

The big changes and the in-betweens

Perhaps the largest shift in either order is the deletion of a section of an executive order Biden signed in January on digital identity verification that was intended to fight cybercrime and fraud. In undoing the measures in that section, the White House asserted that it was removing mandates “that risked widespread abuse by enabling illegal immigrants to improperly access public benefits.”

One critic, speaking on condition of anonymity to discuss the changes candidly, said “there’s not a single true statement or phrase or word in” the White House’s claim. The National Security Council did not respond to requests for comment on the order.

Some, though, such as Nick Leiserson of the Institute for Security and Technology, observed that the digital identities language in the Biden order was among the “weakest” in the document, since it only talked about how agencies should “consider” ways to accept digital identities.

The biggest prospective change in the March order was a stated shift for state and local governments to handle disaster preparedness, including for cyberattacks, a notion that drew intense criticism from cyber experts at the time who said states don’t have the resources to defend themselves against Chinese hackers alone. But that shift could have bigger ripples than originally realized.

Errol Weiss, chief security officer at the Health-ISAC, an organization devoted to exchanging threat information in the health sector, said that as the Cybersecurity and Infrastructure Security Agency has scaled back the free services it offers like vulnerability scanning, states would hypothetically have to step into that gap to aid entities like the ones Weiss serves. “If that service goes away, and pieces of it probably already have, there’s going to be a gap there,” he said.

Some of the changes from the March order might only be realized now that the Senate has confirmed Sean Cairncross as national cyber director, or after the Senate takes action on Sean Plankey’s nomination to lead CISA, said Jim Lewis, a fellow at the Center for European Policy Analysis.

For instance: The order directs a review of critical infrastructure policy documents, including National Security Memorandum 22, a rewrite of a decade-old directive meant to foster better threat information sharing and respond to changing threats. There are already signs the administration plans to move away from that memorandum, a development that a Union of Concerned Scientists analyst said was worrisome, but critics of the memo such as Montgomery said a do-over could be a good thing.

Most of the other biggest potential changes, however, are in the June order. This is a partial list:

  • It eliminates a requirement under the January Biden order that government vendors provide certifications about the security of their software development to CISA for review. “I just don’t think that you can play the whole, ‘We care about cyber,’ and, ‘Oh, by the way, this incredible accountability control? We rolled that back,’” said Jake Williams, director of research and development at Hunter Strategy.
  • It removes another January Biden order requirement that the National Institute of Standards and Technology develop new guidance on minimum cybersecurity practices, thought to be among that order’s “most ambitious prescriptions.”
  • It would move CISA in the direction of implementing a “no-knock” or “no-notice” approach to hunting threats within federal agencies, Leiserson noted.
  • It strikes language saying that the internet data routing rules known as Border Gateway Protocol are “vulnerable to attack and misconfiguration,” something Williams said might ease pressure on internet service providers to make improvements. “The ISPs know it’s going to cost them a ton to address the issue,” he said.
  • It erases a requirement from the Biden order, one that contained no deadline, that federal systems deploy phishing-resistant multi-factor authentication.
  • It deletes requirements for pilot projects stemming from the Defense Advanced Research Projects Agency-led Artificial Intelligence Cyber Challenge. DARPA recently completed its 2025 challenge, awarding prize money at this year’s DEF CON cybersecurity conference.
  • It says that “agencies’ policies must align investments and priorities to improve network visibility and security controls to reduce cyber risks,” a change security adviser and New York University adjunct professor Alex Sharpe praised.

Some of the changes led analysts to conclude, variously, that directives from the January Biden executive order on matters like federal agency email encryption or post-quantum cryptography were being continued or rolled back.

The head-scratchers and the mysteries

Some of the moves in the June order perplexed analysts.

One was specifying that cyber sanctions must be limited, in the words of a White House fact sheet, “to foreign malicious actors, preventing misuse against domestic political opponents and clarifying that sanctions do not apply to election-related activities.” The Congressional Research Service could find no indication that cyber sanctions had been used domestically, and said the executive order appears to match prior policy.

Another is the removal of the NIST guidance on minimum cybersecurity practices. “If you’re trying to deregulate, why kill the effort to harmonize the standards?” Sharpe asked. 

Yet another is the deletion of a line from the January Biden order referring to the importance of open-source software. “This is a bit puzzling, as open source software does underlie almost all software, including federal systems,” Leiserson wrote (emphasis his).

Multiple sources told CyberScoop it’s unclear who wrote the June order and whom they consulted with in doing so. One source said some agency personnel complained about the lack of interagency vetting of the document. Another said Alexei Bulazel, the NSC director of cyber, appeared to have no role in it.

Another open question is how much force will be put behind implementing the June order.

It loosens the strictness with which agencies must carry out the directives it lays out, at least compared with the January Biden order. It gives the national cyber director a more prominent role in coordination, Leiserson said. And it gives CISA new jobs.

“Since President Trump took office — and strengthened by his Executive Order in June — CISA has taken decisive action to bolster America’s cybersecurity, focusing on critical protections against foreign cyber threats and advancing secure technology practices,” said Marci McCarthy, director of public affairs for CISA.

California Rep. Eric Swalwell, the top Democrat on the House Homeland Security Committee’s cyber subpanel, told CyberScoop he was skeptical about what the June executive order signaled about Trump’s commitment to cybersecurity.

“The President talks tough on cybersecurity, but it’s all for show,” he said in a statement. “He signed the law creating CISA and grew its budget, but also rolled back key Biden-era protections, abandoned supply chain efforts, and drove out cyber experts. CISA has lost a third of its workforce, and his FY 2026 budget slashes its funding …

“Even if his cyber and AI goals are sincere, he’s gutted the staff needed to meet them,” Swalwell continued. “He’s also made the government less secure by giving unvetted allies access to sensitive data. His actions don’t match his words.”

Montgomery said there was a contradiction between the June order giving more responsibilities to agencies like NIST while the administration was proposing around a 20% cut to that agency, and the March order shifting responsibilities to state and local governments without giving them the resources to handle it.

A WilmerHale analysis said that as the administration shapes cyber policy, the June order “signals what that approach is likely to be: removing requirements perceived as barriers to private sector growth and expansion while preserving key requirements that protect the U.S. government’s own systems against cyber threats posed by China and other hostile foreign actors.”

For all of the changes it could make, analysts agreed the June order does continue a fair number of Biden administration policies, like commitments to the Cyber Trust Mark labeling initiative, space cybersecurity policy and requirements for defense contractors to protect sensitive information.

Some of those proposals didn’t get very far before the changeover from Biden to Trump. But it might be easier for the Trump administration to achieve its goals.

“It’s hard to say the car is going in the wrong direction when they haven’t started the engine,” Lewis said. “These people don’t have the same problem, this current team, because they’re stripping stuff back. They’re saying, ‘We’re gonna do less.’ So it’s easier to do less.”


DARPA’s AI Cyber Challenge reveals winning models for automated vulnerability discovery and patching

8 August 2025 at 17:53

The Pentagon’s two-year public competition to spur the development of cyber-reasoning systems that use large language models to autonomously find and patch vulnerabilities in open-source software concluded Friday with $8.5 million awarded to three teams of security specialists at DEF CON. 

The Defense Advanced Research Projects Agency’s AI Cyber Challenge seeks to address a persistent bottleneck in cybersecurity — patching vulnerabilities before they are discovered or exploited by would-be attackers.

“We’re living in a world right now that has ancient digital scaffolding that’s holding everything up,” DARPA Director Stephen Winchell said. “A lot of the code bases, a lot of the languages, a lot of the ways we do business, and everything we’ve built on top of it has all incurred huge technical debt… It is a problem that is beyond human scale.” 

The seven semifinalists, who earned their spots out of 90 teams convened at last year’s DEF CON, were scored on their models’ ability to quickly, accurately and successfully identify and generate patches for synthetic vulnerabilities across 54 million lines of code. The models discovered 77% of the vulnerabilities presented in the final scoring round and patched 61% of those synthetic defects at an average speed of 45 minutes, the competition organizers said.

The models also discovered 18 real zero-day vulnerabilities, including six in the C programming language and 12 in Java codebases. The teams’ models patched none of the C codebase zero-days, but automatically patched 11 of the Java zero-days, according to the final results shared Friday.

Team Atlanta took the first-place prize of $4 million, Trail of Bits won second place and $3 million in prize money, and Theori ranked third, taking home $1.5 million. The competition’s organizers allocated an additional $1.4 million in prize money for participants who demonstrate that their technology has been deployed into critical infrastructure.

Representatives from the three winning teams said they plan to reinvest the majority of the prize money back into research and further development of their cyber-reasoning systems or explore ways to commercialize the technology.

Four of the models developed under the competition were made available as open source Friday, and the three remaining models will be released in the coming weeks, officials said.

“Our hope is this technology will harden source code by being integrated during the development stage, the most critical point in the software lifecycle,” Andrew Carney, program manager of the competition, said during a media briefing about the challenge last week. 

Open sourcing the cyber-reasoning systems and the AI Cyber Challenge’s infrastructure should also allow others to experiment and improve upon what the competition helped foster, he said. DARPA and partners across government and the private sector involved in the program are pursuing paths to push the technology developed during the competition into open-source software communities and commercial vendors for broader adoption.

DARPA’s AI Cyber Challenge is a public-private endeavor, with Google, Microsoft, Anthropic and OpenAI each donating $350,000 in LLM credits and additional support. The initiative seeks to test AI’s ability to identify and patch vulnerabilities in open-source code of vital importance throughout critical infrastructure, including health care. 

Jim O’Neill, deputy secretary of the Department of Health and Human Services, spoke to the importance of this work during the AI Cyber Challenge presentation at DEF CON. “Health systems are among the hardest networks to secure. Unlike other industries, hospitals must maintain 24/7 uptime, and they don’t get to reboot. They rely on highly specialized, legacy devices and complex IT ecosystems,” he said. 

“As a result, patching a vulnerability in health care can take an average of 491 days, compared to 60 to 90 days in most other industries,” O’Neill added. “Many cybersecurity products, unfortunately, are security theater. We need assertive proof-of-work approaches to keep networks, hospitals and patients safer.”

Health officials and others directly involved in the AI Cyber Challenge acknowledged the problems posed by insecure software are vast, but said the results showcased from this effort provide a glimmer of hope. 

“The magnitude of the problem is so incredibly overwhelming and unreasonable that this is starting to make it so that maybe we can actually secure networks — maybe,” Jennifer Roberts, director of resilient systems at HHS’s Advanced Research Projects Agency for Health, said during a media briefing at DEF CON after the winners were announced. 

Kathleen Fisher, director of DARPA’s Information Innovation Office, shared a similar cautiously optimistic outlook. “Software runs the world, and the software that is running the world is riddled with vulnerabilities,” she said.

“We have this sense of learned helplessness, that there’s just nothing we can do about it. That’s the way software is,” she continued. The AI Cyber Challenge “points to a brighter future where software does what it’s supposed to do and nothing else.”

