U.S. robotics companies want federal help to keep Chinese robots out of America’s networks
Executives at top U.S. robotics companies asked Congress for federal dollars, new legislation and a simpler regulatory landscape, arguing the support is necessary to adapt to the AI era and compete with their well-oiled, state-funded Chinese competitors.
The U.S. robotics sector, estimated at $50 billion in value, includes world-famous companies like Boston Dynamics. The industry is projected to sell millions of robots across the country over the next four years.
According to a 2025 report from the International Federation of Robotics, an average of 500,000 robots were sold and installed worldwide each year between 2020 and 2024. China alone accounted for 54% of those installations, compared with just 9% for the United States.
Matthew Malchano, vice president of software at Boston Dynamics, told lawmakers at a House Homeland Security cyber subcommittee hearing Tuesday that robotics represents the physical infrastructure necessary to support the country’s efforts to dominate the global AI race, as robots, drones and other machines more fully integrate AI systems in the coming years.
He pointed to Chinese companies like Unitree, which are capturing market share with police departments and universities across the United States despite contract ties to the Chinese military and cybersecurity vulnerabilities, including a wormable exploit found in 2025 that would allow an attacker to take over fleets of Unitree robots.
Malchano said Unitree is one of “dozens” of Chinese companies propped up by China’s national AI and robotics plan, which “envisions transforming virtually every major industry in China by integrating AI-powered robots” through funding and favorable policies.
He pressed U.S. lawmakers for a similar national strategy, and stumped for passage of the National Commission on Robotics Act, sponsored by Rep. Jay Obernolte, R-Calif., which would establish a bipartisan commission to drive it.
Max Fenkell, global head of policy and government relations at Scale AI, said that while the U.S. is winning the AI race on its chosen metrics – model quality and chips – it is “losing” on data and implementation.
Unlike large language models, which are trained on data pulled straight from the internet, AI systems for robots will require unique training data gathered, categorized and labeled through thousands of hours of bespoke testing.
While China has pursued an “industrialized” training strategy in tandem with industry, funding mile-long stretches of warehouses dedicated to gathering training data for Chinese companies, the U.S. has no similar strategy.
“We’re seeing two different races play out and I fear right now the United States may be winning the wrong one,” he said.
Executives at the hearing were unanimous in suggesting Congress block U.S. federal agencies from purchasing Chinese-made robots and create a single federal regulatory standard for the industry, while Fenkell and Malchano asked the Cybersecurity and Infrastructure Security Agency to conduct a security review of foreign-made robots.
At the hearing, Rep. James Walkinshaw, D-Va., noted a long history of bipartisan cooperation to help U.S. companies compete against state-subsidized Chinese firms.
“With extensive state investment in technology companies and laws that enlist private companies to serve the interest of the government, the PRC’s military-civil fusion is a serious threat to our own national security,” said Walkinshaw.
AI-powered robots collide with the Trump administration’s thirst for data
As lawmakers weigh how best to position U.S. companies to compete with China, they must also grapple with the possibility that AI-powered robots could be hacked, manipulated or intentionally turned against the public.
Privacy and civil liberties experts have long expressed concerns about the use of robots in areas like policing, in certain military contexts and against American citizens.
The requests for more help from Washington come at the same time the U.S. government, including the military and the Department of Homeland Security, has become markedly more aggressive under the Trump administration about tracking data on Americans and using force against U.S. citizens during immigration operations.
Companies like Boston Dynamics sell their robots to manufacturing facilities, semiconductor fabricators, energy plants, first responders, and the U.S. Secret Service. But they also sell them to police departments and the U.S. military, and an early version of the company’s viral “BigDog” quadruped was developed with funding from the Defense Advanced Research Projects Agency at the Department of Defense.
Last year, Immigration and Customs Enforcement spent $78,000 on a Canadian-made robot that could perform tasks similar to those of Spot, a Boston Dynamics robot model, including deploying smoke bombs, according to Governing.
Last month, DHS finalized a $1 billion contract with Palantir to expand AI data analytics across the department to support immigration enforcement. The Coast Guard alone is investing $350 million in robotics and autonomous systems by 2028.
Congressional Democrats are currently blocking funding for DHS over its immigration and data collection policies.
The post U.S. robotics companies want federal help to keep Chinese robots out of America’s networks appeared first on CyberScoop.
Your AI doctor doesn’t have to follow the same privacy rules as your real one
AI apps are making their way into healthcare. It’s not clear that rigorous data security or privacy practices will be part of the package.
OpenAI, Anthropic and Google have all rolled out AI-powered health offerings over the past year. These products are designed to provide health and wellness advice to individual users or organizations, helping to diagnose illnesses, examine medical records and perform a host of other health-related functions.
OpenAI says that hundreds of millions of people already use ChatGPT to answer health and wellness questions, and studies have found that large language models can be remarkably proficient at medical diagnostics, with one paper calling their capabilities “superhuman” compared with human doctors.
But in addition to traditional cybersecurity concerns about how well these chatbots can protect personal health data, there are a host of questions about what kind of legal protections users have for the personal medical data they share with these apps. Several health care and legal experts told CyberScoop that these companies are almost certainly not subject to the same legal or regulatory requirements – such as data protection rules under the Health Insurance Portability and Accountability Act (HIPAA) – that compel hospitals and other healthcare facilities to protect patient data.
Sara Geoghegan, senior counsel at the Electronic Privacy Information Center, said offering the same or similar data protections as part of a terms of service agreement is markedly different from interacting with a regulated healthcare entity.
“On a federal level there are no limitations – generally, comprehensively – on non-HIPAA protected information or consumer information being sold to third parties, to data brokers,” she said.
She also pointed to the data privacy concerns that stemmed from the bankruptcy and sale of genetic testing company 23andMe last year as a prime example of the dangers consumers face when handing over sensitive health or biometric data to an unregulated entity.
In many cases, these AI health apps carry the same kind of security and privacy risks as other generative AI products: data leakage, hallucinations, prompt injections and a propensity to give confident but wrong answers.
Additionally, data breaches in the healthcare industry have become increasingly common over the past several years, even before the current AI boom. Healthcare organizations are frequent targets for hacking, phishing, and ransomware, and even though companies can be held legally responsible under HIPAA for failing to protect patient data, breaches still happen because many systems rely on outdated software, depend on numerous outside vendors, and struggle to keep up with the cost and complexity of strong cybersecurity.
Carter Groome, CEO of First Health Advisory, a healthcare and cybersecurity risk management consulting firm, said that beyond concerns over whether these tech companies can even reasonably promise to protect your health data, it’s also not clear their security protections are anything more than a company policy.
“They’re not mandated by HIPAA,” Groome said. “Organizations that are building apps, there’s a real gray area for any sort of compliance” with health care data privacy laws.
Privacy is especially important in health and medicine, both for protecting sensitive medical information and for building trust in the health system overall. That’s why hospitals, doctors’ offices, lab testing facilities and other associated entities have been subject to heightened laws and regulations around protecting patient records and other health data.
Laws like HIPAA require covered entities and their business associates to “maintain reasonable and appropriate administrative, physical, and technical safeguards for the security of certain individually identifiable health information.”
HIPAA also subjects companies to breach notification rules that force them to notify victims, the Department of Health and Human Services and, in some cases, the public when certain health data has been accessed, acquired, used or disclosed in a data breach.
Groome and Andrew Crawford, senior counsel at the Center for Democracy & Technology’s Data and Privacy Project, said that tech companies like OpenAI, Anthropic and Google almost certainly would not be considered covered entities under HIPAA’s security rule, which according to HHS applies to health plans, clearinghouses, health care providers and business associates who transmit electronic protected health information (ePHI).
OpenAI and Anthropic do not claim that ChatGPT Health or Claude for Healthcare comply with HIPAA. Anthropic’s website describes Claude for Healthcare as “built on HIPAA-ready infrastructure,” while OpenAI’s page for its suite of healthcare-related enterprise products says they “support” HIPAA compliance.
OpenAI, Anthropic and Google did not respond to a request for comment from CyberScoop.
That distinction means “that a number of companies not bound by HIPAA’s privacy protections will be collecting, sharing, and using people’s health data,” Crawford said in a statement to CyberScoop. “And since it’s up to each company to set the rules for how health data is collected, used, shared, and stored, inadequate data protections and policies can put sensitive health information in real danger.”
Laws like HIPAA contain strong privacy protections for health data but are limited in scope and “meant to help the digitization of records, not stop tech companies from gathering your health data outside of the doctor’s office,” Geoghegan said.
As they expand into healthcare, tech companies like OpenAI, Anthropic, and Google have emphasized data security as a top priority in their product launches.
OpenAI said its health model uses an added layer of built-in encryption and isolation features to compartmentalize health conversations, as well as protections like multifactor authentication. And, like other OpenAI models, ChatGPT Health encrypts its data at rest and in transit, offers a feature to delete chats within 30 days, and promises user data won’t be used for AI training.
For uploading medical records, OpenAI said it is partnering with b.well, an AI-powered digital health platform that connects health data for U.S. patients. On its website, b.well says it uses a transparent, consumer-friendly privacy policy that lets users control and change data-sharing permissions at any time, does not sell personal data, and only shares it without permission in limited cases. It also voluntarily follows the CARIN Alliance Trust Framework and Code of Conduct, making it accountable to the FTC, and says it aims to meet or exceed HIPAA standards through measures like encryption, regular security reviews, and HITRUST and NIST CSF certifications, though it notes no system can fully eliminate cyber risk.
Legal experts say that when tech companies promise their AI products are “HIPAA compliant” or “HIPAA ready,” it’s often unclear whether these claims amount to anything more than a promise not to use health data irresponsibly.
These distinctions matter when it comes to personal health data. Geoghegan said it is not uncommon in some corners of the wellness industry for an unregulated business to ambiguously claim it is “HIPAA-compliant” to obscure the fact that it isn’t legally bound by the regulations.
“Generally speaking, a lot of companies say they’re HIPAA compliant, but what they mean is that they’re not a HIPAA regulated entity, therefore they have no obligation,” said Geoghegan.
Groome suggested that AI companies are being “hyperbolic” in their commitment to security in an effort to assuage the concerns of privacy critics, noting that their product announcements contain “a comical level of how much they say they’re going to protect your information.”
An added wrinkle is that AI tools remain black boxes in some respects, with even their developers unable to fully understand or explain how they work. That kind of uncertainty, especially with healthcare data, can lead to bad security or privacy outcomes.
“It’s really shaky right now when a company comes out and says ‘we’re fully HIPAA compliant’ and I think what they’re doing is trying to give the consumer a false sense of trust,” said Groome.
Several sources told CyberScoop that despite these risks, they expect AI health apps to continue being widely used, in part because the traditional American healthcare system remains so expensive.
AI tools, by contrast, are convenient, immediate and cost-effective. And while people like Geoghegan and Groome said they are sympathetic to the pressures that push people toward these apps, they find the tradeoffs troubling.
“A lot of this stems from the fact that care is inaccessible, it’s hard to get and it’s expensive, and there are many reasons why people don’t trust in health care provisions,” said Geoghegan. “But the solution to that care being inaccessible cannot be relying on big tech and billionaires’ products. We just can’t trust [them] to have our best health interest in mind.”
The post Your AI doctor doesn’t have to follow the same privacy rules as your real one appeared first on CyberScoop.
Predator bots are exploiting APIs at scale. Here’s how defenders must respond.
The rise of malicious bots is changing how the internet operates, underscoring the need for stronger safeguards that keep humans firmly in control. Bots now account for more than half of global web traffic, and a new class of “predator bots” has emerged, unleashing self-learning programs that adapt in real time, mimic human behavior, and exploit APIs and business logic in order to steal data, scalp goods, and hijack transactions.
The economic fallout is staggering: bots and API attacks drain up to $186 billion annually, driven by credential theft, scalping, and fake account creation that fuel large-scale fraud and distort online markets. This represents one of the fastest-growing forms of cyber-enabled economic harm, and it’s happening mostly out of sight.
Security teams can’t afford to let hackers have the upper hand with automation. Addressing the growing bot crisis requires a deep knowledge of APIs and their vulnerabilities, as well as the ability to leverage automation in ways that match and counter attackers’ growing arsenals.
The new bot economy
Over the last few years, AI has accelerated malicious automation from simple scripts to adaptive systems that evolve in real time. Today’s predator bots blend seamlessly into normal traffic patterns, dramatically increasing the volume of legitimate-appearing bot traffic and making it harder for security teams to spot.
The influx of bots has led to an unprecedented scale of credential theft, account takeover, scraping, scalping, and promotion fraud. With malicious bots now accounting for roughly 37% of all web traffic, security teams are left feeling like they’re playing a giant game of bot whack-a-mole.
Predator bots are not only causing financial damage; they’re also slowly eroding customer confidence and broader societal trust in digital infrastructure. These bots target every sector, from financial services to citizen services and beyond, further chipping away at public trust in critical infrastructure. Even small disruptions can now be amplified through automation, turning minor weaknesses into large-scale outages or fraud events.
As predator bots continue to grow in influence and scale, defenders are left with a shrinking window of time to secure today’s digital infrastructure for tomorrow’s customers.
APIs are the front line
APIs are the fabric that connects the internet, powering functions like identity management, payments, checkout carts, inventory, and customer access. The very essence of how APIs connect the internet is also what makes them the most vulnerable targets. While APIs represent roughly 14% of attack surfaces, they attract 44% of advanced bot traffic, highlighting the imbalance of risk.
Predator bots differ from attacks that target code vulnerabilities: they exploit business logic, turning an organization’s own workflows against it. This manifests in API-driven abuse of legitimate functionality, from manipulating checkout flows to large-scale data scraping. As AI enables both high-volume brute-force attacks and low-and-slow stealth attacks, security teams are quickly realizing traditional defenses are no longer up to par.
With hackers zeroing in on API abuse to drive predator bot attacks, visibility, classification, and behavior monitoring are now core to digital trust. Shadow APIs and forgotten endpoints only widen the attack surface, giving predators more places to hide. Shining a light on AI-powered bots requires layered defense strategies that combine human insight with advanced, adaptive technology.
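To make that visibility work concrete, here is a minimal sketch in Python, assuming an access log in a common combined format and an OpenAPI spec (openapi.json) listing the documented endpoints; the file names and the ID-collapsing heuristic are illustrative assumptions, not a production discovery tool. It simply diffs the endpoints actually being hit against the endpoints anyone bothered to document; the gap is where shadow APIs live.

```python
import json
import re
from collections import Counter

# Illustrative sketch: diff the endpoints observed in access logs against a
# documented OpenAPI spec to surface potential "shadow" APIs. File names and
# the ID-collapsing heuristic are assumptions for the example.

LOG_LINE = re.compile(r'"(?:GET|POST|PUT|PATCH|DELETE) (?P<path>\S+) HTTP')

def documented_paths(spec_file: str) -> set:
    """Load the documented endpoint paths from an OpenAPI spec."""
    with open(spec_file) as f:
        spec = json.load(f)
    return set(spec.get("paths", {}))

def observed_paths(log_file: str) -> Counter:
    """Count the endpoint paths actually being hit, per the access log."""
    hits = Counter()
    with open(log_file) as f:
        for line in f:
            match = LOG_LINE.search(line)
            if match:
                # Drop query strings and collapse numeric IDs so that
                # /users/42 and /users/7 count as the same endpoint.
                path = match.group("path").split("?")[0]
                path = re.sub(r"/\d+", "/{id}", path)
                hits[path] += 1
    return hits

if __name__ == "__main__":
    known = documented_paths("openapi.json")
    for path, count in observed_paths("access.log").most_common():
        if path not in known:
            print(f"possible shadow endpoint: {path} ({count} hits)")
```

In practice, discovery would also draw on API gateway configurations and mirrored traffic, but even a simple log-versus-spec diff like this routinely surfaces forgotten endpoints.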
Defending at machine speed
As automated attacks continue to mature and evolve, traditional defense tactics like static rules, CAPTCHAs, and IP blocking can no longer keep pace. To defend against bots at machine speed, security teams must pair modern defense tactics rooted in autonomy and agility with human expertise.
Bots don’t act in isolation, and neither should security teams. Autonomous controls can take over detection and response, automatically flagging suspicious bot behavior and enforcing protections like adaptive MFA. This allows human analysts to focus on high-value work like threat modeling and strategic risk reduction.
Security teams should start with complete API discovery, including shadow and forgotten endpoints, to ensure they know their digital environment inside and out. Next, teams must adopt proactive security measures like behavioral bot detection, MFA, machine-speed anomaly detection, and business logic monitoring. These measures help catch bots before damage can be inflicted.
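As a rough illustration of what machine-speed, behavioral detection can look like, the toy sketch below flags a client whose per-minute request rate jumps far above its own recent baseline and escalates it to an adaptive MFA challenge. The window size, the z-score threshold, and the step_up_mfa hook are all assumptions for the example, not any vendor’s implementation.

```python
import statistics
from collections import defaultdict, deque

# Illustrative sketch: flag clients whose request rate deviates sharply from
# their own recent baseline, a simple stand-in for behavioral bot detection.
# The window size, threshold, and step_up_mfa hook are assumptions.

WINDOW = 60        # per-minute request counts kept as each client's baseline
THRESHOLD = 3.0    # z-score above which traffic looks automated

baselines = defaultdict(lambda: deque(maxlen=WINDOW))

def record_minute(client_id: str, requests_this_minute: int) -> bool:
    """Return True if this client's traffic looks anomalous this minute."""
    history = baselines[client_id]
    anomalous = False
    if len(history) >= 10:  # require some history before judging
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
        anomalous = (requests_this_minute - mean) / stdev > THRESHOLD
    history.append(requests_this_minute)
    return anomalous

def step_up_mfa(client_id: str) -> None:
    """Hypothetical hook: trigger an adaptive MFA challenge."""
    print(f"adaptive MFA challenge issued for {client_id}")

if __name__ == "__main__":
    # Ten minutes of steady, human-ish traffic, then an automated burst.
    for count in [12, 9, 14, 11, 10, 13, 8, 12, 11, 10, 480]:
        if record_minute("client-A", count):
            step_up_mfa("client-A")
```

Real deployments would layer many more signals on top, but the per-client baseline captures the core idea: judge traffic against its own history rather than a static rule.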
Today’s defense must operate, to some degree, like attacks: continuous, context-aware, and capable of adapting in real time. By augmenting human capabilities with autonomous tools, security teams shift from being overwhelmed and responding to threats reactively to operating proactively and intelligently. Security cannot afford to lag behind; it must evolve in lockstep with the threats teams face.
Automation is the new battleground
As AI accelerates attack automation, defenders need modern, AI-powered tools that match the speed of attackers and free security teams to concentrate on the complex, judgment-driven work that machines can’t replicate.
The future is about more than keeping bots out. Security’s next phase will be defined by behavior-driven insight, intent-based detection, and defense at machine speed.
Tim Chang is the global vice president of application security at Thales.
The post Predator bots are exploiting APIs at scale. Here’s how defenders must respond. appeared first on CyberScoop.
House GOP leaders seek government probe, restrictions on Chinese-made tech
A Commerce Department office should investigate Chinese government-connected products in more than a dozen emerging industries for security threats, a group of House GOP committee leaders said in a letter they released Wednesday.
In the missive, the lawmakers said the Office of Information and Communications Technology and Services has the power to both investigate and restrict those products in areas like artificial intelligence and energy generation.
China, they wrote, has already demonstrated that it views information technology as a battlefield with its cyberattacks on the United States.
“A compromised power grid, an infiltrated telecommunications network, or a manipulated industrial control system can pose as great a threat as a kinetic military strike,” the House members said. “The fusion of digital capabilities with critical infrastructure has whittled away geographic borders, as connected infrastructure or products can be controlled or updated by entities in another country.
“Without a concerted effort to create a secure technology ecosystem from the very beginning of each supply chain, our adversaries will continue to exploit our dependence on their technology to undermine U.S. economic and military stability,” they continued.
The lawmakers signing the letter were House Homeland Security Chairman Andrew Garbarino of New York; Committee on China Chairman John Moolenaar of Michigan; Foreign Affairs Chairman Brian Mast of Florida; Intelligence Chairman Rick Crawford of Arkansas; and Bill Huizenga of Michigan, who chairs the Foreign Affairs Subcommittee on South and Central Asia.
Some of the industries and companies on the lawmakers’ list have already drawn attention from the U.S. government, including from the Commerce Department. For instance, the departments of Commerce, Defense and Justice reportedly opened investigations last year into the Chinese router manufacturer TP-Link. More than a half-dozen agencies are said to support a ban on TP-Link Systems of Irvine, Calif., which was spun off from the Chinese company.
TP-Link Systems disputes allegations that it poses a national security threat.
Other products mentioned in the GOP members’ letter include industrial control systems, robotics, cameras, chip design software, drones and tools necessary for semiconductor production.
The Commerce Department did not immediately respond to requests for comment on the GOP letter. The government shutdown has led some agencies to stop responding to emails.
The Trump administration is in the midst of a prolonged and wide-ranging battle over trade with Beijing, one that includes potential curbs on exports to China made with U.S. software and Nvidia’s most advanced chips. Chinese-made products and their potential impacts on cybersecurity have sparked widespread concerns.
The post House GOP leaders seek government probe, restrictions on Chinese-made tech appeared first on CyberScoop.
Crafting the Perfect Prompt: Getting the Most Out of ChatGPT and Other LLMs

Bronwen Aker // Sr. Technical Editor, M.S. Cybersecurity, GSEC, GCIH, GCFE

Go online these days and you will see tons of articles, posts, Tweets, TikToks, and videos about how […]
The post Crafting the Perfect Prompt: Getting the Most Out of ChatGPT and Other LLMs appeared first on Black Hills Information Security, Inc..