HackerOne rolls out industry framework to support ‘good faith’ AI research

By: djohnson
20 January 2026 at 15:59

Four years ago, the Department of Justice announced it would no longer seek criminal charges against independent and third-party security researchers for “good faith” security research under the Computer Fraud and Abuse Act.

Now, a prominent bug bounty platform is attempting to build a framework for industry to offer similar protections to researchers who study flaws in AI systems, including fields like AI safety and other areas that examine unintended behaviors and outputs that can affect security outcomes.

Ilona Cohen, chief legal and policy officer at HackerOne, told CyberScoop the Good Faith AI Research Safe Harbor is meant to build on previous efforts — like the DOJ policy change and the company’s own Gold Standard Safe Harbor framework — that give third-party security researchers wider legal freedom to prod and test commercial products and systems for flaws, and to extend those protections to the AI realm.

HackerOne previously pushed the DOJ to provide further guidance on how its good faith researcher policy would apply to AI systems. Cohen said the department’s language “provides a lot of clarity and helped security researchers have the comfort to be able to do the testing that’s so important to the backbone of our security industry, [but] it doesn’t necessarily apply to all AI research.”

The DOJ’s policy change in 2022 represented a hard-fought victory following years of advocacy by the cybersecurity community. Without further guidance from DOJ, Cohen said it was important for industry to do the same foundational work around advocacy and governance for AI testing that helped good faith hackers convince the agency that independent researchers are an asset to the broader cybersecurity ecosystem.

Participating companies can attach a “banner” to their HackerOne profile advertising their adoption of the protections, which commit them to, among other things, “refraining from legal action … and supporting researchers if third parties pursue claims related to authorized research.”

Even as the Trump administration signals little interest in safety or security issues around AI systems, other policymakers have said strong protections and guardrails should be one of the key differentiators when convincing other countries to adopt U.S.-made AI systems and models over authoritarian competitors like China. Cohen said it was especially critical to open up testing of AI systems at a time when they are being broadly adopted across society.

“Since AI systems are essentially deploying a lot faster than any of the governance or legal frameworks can keep up, that creates some risk … for all of us when people are reluctant to do testing of AI systems,” Cohen said.

Frontier AI companies like OpenAI and Anthropic have generally kept a tighter grip on their own security research programs.

OpenAI, for instance, runs its own network of third-party red team researchers, vetting and selecting them through an application process. According to the company’s website, red-team engagements are commissioned by OpenAI and can be steered to different researchers at the company’s discretion, with some members participating as little as five to 10 hours per year. Researchers can also apply under a separate program that focuses on issues like AI safety and misuse.

Anthropic’s responsible disclosure policy defines “good faith” third-party security research as testing information systems “for the sole purpose” of identifying a reportable vulnerability. As such, researchers are expected to only take actions that are “minimally required to reasonably prove that such potential vulnerability exists” and avoid actual harmful actions, such as exfiltrating or deleting data.

It also requires the researcher to “avoid disclosing the existence of or any details relating to the discovered vulnerability to a third party or to the public” without “notice” from the company.

“We fully support researchers’ right to publicly disclose vulnerabilities they discover,” the terms state. “We ask only to coordinate on the timing of such disclosures to prevent potential harm to our services, customers and other parties.”

Anthropic’s terms also seek to broadly shield the company from any negative outcomes related to the use or integration of its products, using all caps to emphasize that it will “EXPRESSLY DISCLAIM” all warranties of fitness in areas like “ACCURACY, AVAILABILITY, RELIABILITY, SECURITY, PRIVACY, COMPATIBILITY [and] NON-INFRINGEMENT.”

OpenAI and Anthropic did not respond to a request for comment by the time of publication.

Inside Vercel’s sleep-deprived race to contain React2Shell

8 January 2026 at 18:01

Talha Tariq and his colleagues at Vercel, the company that maintains Next.js, endured many sleep-deprived nights and weekends when React2Shell was discovered and disclosed soon after Thanksgiving. The defect, which affects vast stretches of the internet’s underlying infrastructure, posed a significant risk for Next.js, an open-source framework that depends on vulnerable React Server Components.

He quickly realized he had a major problem to confront with CVE-2025-55182, a maximum-severity vulnerability affecting multiple React frameworks and bundlers that allows unauthenticated attackers to achieve remote code execution in default configurations. 

“It’s literally the very first layer that everybody on the internet interacts with, so from a risk perspective and exposure perspective it’s basically as bad as it could be,” Tariq, the company’s CTO, told CyberScoop.

Tariq and his team initiated and coordinated a massive response effort with major cloud providers, the open source community and technology vendors hours after a developer reported the defect to Meta, which initially created and maintained React before moving the open-source library to the React Foundation in October.

The React team publicly disclosed the flaw with a patch four days later, after Vercel and many other impacted providers implemented platform-level mitigations to minimize damages.

Vercel’s deep integration with and understanding of React meant it had an outsized responsibility to investigate and share its findings across the industry. Doing so would help validate the patch’s effectiveness and ensure downstream customers understood the potential risk once the vulnerability was disclosed, Tariq said.

“Nobody slept through the weekend, nobody slept through the night,” he said, adding that it was a 24/7 response for Vercel for a minimum of two weeks — extending beyond the vulnerability disclosure into a cat-and-mouse game with attackers seeking to exploit the defect or bypass the patch.

Cybercriminals, ransomware gangs and nation-state threat groups were all taking swift measures to exploit the vulnerability.

Palo Alto Networks’ Unit 42 confirmed more than 60 organizations were directly impacted by attacks involving exploitation of the defect by mid-December. Valid public exploits also hit an all-time high, nearing 200 by that time, according to VulnCheck.

Malicious activity targeting React2Shell remains at a “sustained, elevated pace,” cybersecurity firm GreyNoise said in a Wednesday update. The company’s sensors have observed more than 8.1 million attempted attacks since the defect was disclosed, with daily volumes now ranging between 300,000 and 400,000 after peaking in the final weeks of December.

Vercel also responded to React2Shell with a quickly arranged HackerOne bounty program offering $50,000 for each verified technique that bypassed its web application firewall. More than 116 researchers participated, and Vercel ultimately paid out $1 million for 20 unique bypass techniques. 

The company said this work allowed it to block more than 6 million exploit attempts targeting environments running vulnerable versions of Next.js. Tariq said it was the “best million dollars spent,” considering the potential impact and exposure it contained.

Tariq doesn’t look back on the initial response to React2Shell with regret. Instead, he sees it as motivation to address a persistent challenge rooted in coordination.

The burden of promptly addressing security issues with the broader community often falls on individuals like Tariq, who relied on personal relationships to coordinate an industry-wide response. That involved direct contact and communication with security leaders at Google, Microsoft, Amazon and others, he said.

“We have to do better as an industry and figure out a more sustaining way to do this,” Tariq said.

