
Maryland pharmacist indicted on unauthorized computer access related to U. Maryland Medical Center

From the U.S. Attorney’s Office, District of Maryland: A Maryland man is facing federal indictment stemming from an unauthorized computer access scheme involving a Maryland medical system. Matthew Bathula, 41, of Clarksville, is charged with two counts of unauthorized access to a protected computer, and one count of aggravated identity theft while working as a...

Source

Almost one year after discovery, Sandhills Medical Foundation notifies 169,017 people affected by a cyberattack

On April 28, Sandhills Medical Foundation in South Carolina notified the Maine Attorney General’s Office of a data breach that affected a total of 169,017 people, only 8 of whom are Maine residents. Their notification to the state and those affected comes almost a year to the day since they first experienced the breach. According...

Source

OCR Announces Settlements of Four Ransomware Investigations that Affected Over 427,000 Individuals

Yesterday, the U.S. Department of Health and Human Services (HHS), Office for Civil Rights (OCR) announced settlements with four regulated entities following separate ransomware investigations under HIPAA's Security Rule. For those keeping count: the resolutions announced mark 19 completed investigations of ransomware breaches and 13 completed investigations in OCR's Risk Analysis Initiative. The settlements follow...

Source

Outside FDA, Inside the Crosshairs: Cybersecurity Risks for General Wellness and Fitness Products

Troutman Pepper Locke writes: In Part One of this series, we discussed how wellness products sit at the intersection of Food and Drug Administration (FDA), Health Insurance Portability and Accountability Act (HIPAA), Federal Trade Commission (FTC), and state privacy/breach laws. In Part Two, we analyzed FDA’s 2026 General Wellness guidance and what it means for device-level cybersecurity expectations....

Source

OCR Releases Risk Management Video

From HHS OCR: This video presentation is intended to raise awareness and provide practical education to HIPAA covered entities and business associates of the HIPAA Security Rule’s Risk Management requirement. Like risk analysis, effective risk management is an essential component of both HIPAA Security Rule compliance and broader cybersecurity preparedness. Risk management is a critical step not only for...

Source

Woodfords Family Services notifying patients and families about 2024 ransomware attack

A notice by Woodfords Family Services in Maine caught my eye because the name sounded familiar. They provide support services for people with disabilities and their families. On March 27, 2026, they issued a notice: What Happened? On April 8, 2024, we discovered suspicious activity within our network. We took steps to secure our environment and...

Source

Florida Medicare members’ data exposed as Mirra Health improperly outsourced records overseas

Skyler Shepard reports: State investigators say Mirra Health jeopardized the safety of thousands of Floridians by sharing their sensitive health data with unauthorized companies overseas. Florida Insurance Commissioner Mike Yaworsky suspended Mirra Health Care LLC on Tuesday after investigators found the company sent private medical information to unlicensed companies in India and the Philippines. Mirra Health handles important claims...

Source

Bell Ambulance data breach impacted over 238,000 people

Pierluigi Paganini reports: Nearly 238,000 individuals are impacted by a February 2025 Bell Ambulance data breach. Bell Ambulance is a U.S.-based emergency medical services provider offering ambulance transport, paramedic care, and patient support. It serves communities with urgent medical response, interfacility transfers, and non-emergency transport, focusing on patient safety and timely care. On February 13,...

Source

Insightin Health discloses its second data security incident in two years

On March 4, 2026, Insightin Health, a vendor that provides data analytics and technology solutions to healthcare payers, submitted a breach notification letter to the California Attorney General’s Office. Of note, it stated, in part: What Happened? Insightin used a file-transfer tool called GoAnywhere to move data between Insightin and your health plan. On September...

Source

HHS’ Office for Civil Rights Settles HIPAA Investigation of MMG Fusion, LLC Breach Affecting 15 Million Individuals

From a press release by HHS OCR: Today, the U.S. Department of Health and Human Services’ (HHS) Office for Civil Rights (OCR) announced a settlement with MMG Fusion, LLC (MMG), a Maryland software company, concerning potential violations of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy, Security, and Breach Notification Rules. MMG...

Source

Data from Insight Hospital and Medical Center Leaked on Dark Web

On or about January 26, 2026, Insight Hospital and Medical Center (“Insight”) in Chicago issued a substitute notice. It states that in September 2025, Insight learned of unusual activity within its network. An investigation subsequently determined that an unauthorized individual accessed the network between August 22, 2025 and September 11, 2025. As of the date...

Source

Increasingly, HIPAA Can’t Stop AI from De-Anonymizing Patient Data

Martin Anderson reports: Even after hospitals strip out names and zip codes, modern AI can sometimes still work out who patients are. Great news for insurance companies; not so much for healthcare recipients. New research from New York University finds that US patients’ medical notes, stripped of names and other HIPAA identifiers, can expose patients to re-identification....

Your AI doctor doesn’t have to follow the same privacy rules as your real one

AI apps are making their way into healthcare. It’s not clear that rigorous data security or privacy practices will be part of the package.

OpenAI, Anthropic and Google have all rolled out AI-powered health offerings over the past year. These products are designed to provide health and wellness advice to individual users or organizations, helping to diagnose their illnesses, examine medical records and perform a host of other health-related functions.

OpenAI says that hundreds of millions of people already use ChatGPT to answer health and wellness questions, and studies have found that large language models can be remarkably proficient at medical diagnostics, with one paper calling their capabilities “superhuman” when compared to a human doctor.

But in addition to traditional cybersecurity concerns around how well these chatbots can protect personal health data, there are a host of questions around what kind of legal protections users would have around the personal medical data they share with these apps. Several health care and legal experts told CyberScoop that these companies are almost certainly not subject to the same legal or regulatory requirements – such as data protection rules under the Health Insurance Portability and Accountability Act (HIPAA) – that compel hospitals and other healthcare facilities to ensure protection of your data.

Sara Geoghegan, senior counsel at the Electronic Privacy Information Center, said offering the same or similar data protections as part of a terms of service agreement is markedly different from interacting with a regulated healthcare entity. 

“On a federal level there are no limitations – generally, comprehensively – on non-HIPAA protected information or consumer information being sold to third parties, to data brokers,” she said. 

She also pointed to data privacy concerns that stemmed from the bankruptcy and sale of genetic testing company 23andMe last year as a prime example of the dangers consumers face when handing over their sensitive health or biometric data to an unregulated entity.

In many cases, these AI health apps carry the same kind of security and privacy risks as other generative AI products: data leakage, hallucinations, prompt injections and a propensity to give confident but wrong answers.

Additionally, data breaches in the healthcare industry have become increasingly common over the past several years, even before the current AI boom. Healthcare organizations are frequent targets for hacking, phishing, and ransomware, and even though companies can be held legally responsible under HIPAA for failing to protect patient data, breaches still happen because many systems rely on outdated software, depend on numerous outside vendors, and struggle to keep up with the cost and complexity of strong cybersecurity.

Carter Groome, CEO of First Health Advisory, a healthcare and cybersecurity risk management consulting firm, said that beyond concerns over whether these tech companies can even reasonably promise to protect your health data, it’s also not clear their security protections are anything more than a company policy.

“They’re not mandated by HIPAA,” Groome said. “Organizations that are building apps, there’s a real gray area for any sort of compliance” with health care data privacy laws.

Privacy is especially important in health and medicine, both for protecting sensitive medical information and for building trust in the health system overall. That’s why hospitals, doctor’s offices, lab testing facilities and other associated entities have been subject to heightened laws and regulations around protecting patient records and other health data.

Laws like HIPAA require covered entities and their business associates to “maintain reasonable and appropriate administrative, physical, and technical safeguards for the security of certain individually identifiable health information.”

It also subjects companies to breach notification rules that force them to notify victims, the Department of Health and Human Services and in some cases the public when certain health data has been accessed, acquired, used or disclosed in a data breach.

Groome and Andrew Crawford, senior counsel at the Center for Democracy and Technology’s Data and Privacy Project, said that tech companies like OpenAI, Anthropic and Google almost certainly would not be considered covered entities under HIPAA’s security rule, which according to HHS applies to health plans, clearinghouses, health care providers and business associates who transfer Electronic Protected Health Information (ePHI).

OpenAI and Anthropic do not claim that ChatGPT Health or Claude for Healthcare follow HIPAA. Anthropic’s website describes Claude for Healthcare as “built on HIPAA-ready infrastructure,” while OpenAI’s page for its suite of healthcare-related enterprise products claims they “support” HIPAA compliance.

OpenAI, Anthropic and Google did not respond to a request for comment from CyberScoop. 

That distinction means “that a number of companies not bound by HIPAA’s privacy protections will be collecting, sharing, and using people’s health data,” Crawford said in a statement to CyberScoop. “And since it’s up to each company to set the rules for how health data is collected, used, shared, and stored, inadequate data protections and policies can put sensitive health information in real danger.”

Laws like HIPAA contain strong privacy protections for health data but are limited in scope and “meant to help the digitization of records, not stop tech companies from gathering your health data outside of the doctor’s office,” Geoghegan said.

As they expand into healthcare, tech companies like OpenAI, Anthropic, and Google have emphasized data security as a top priority in their product launches.

OpenAI said its health model uses an added layer of built-in encryption and isolation features to compartmentalize health conversations, as well as added features like multifactor authentication. And, like other OpenAI models, ChatGPT Health encrypts its data at rest and in transit, has a feature to delete chats within 30 days and promises your data won’t be used for AI training.

For uploading medical records, OpenAI said it is partnering with b.well, an AI-powered digital health platform that connects health data for U.S. patients. On its website, b.well says it uses a transparent, consumer-friendly privacy policy that lets users control and change data-sharing permissions at any time, does not sell personal data, and only shares it without permission in limited cases. It also voluntarily follows the CARIN Alliance Trust Framework and Code of Conduct, making it accountable to the FTC, and says it aims to meet or exceed HIPAA standards through measures like encryption, regular security reviews, and HITRUST and NIST CSF certifications, though it notes no system can fully eliminate cyber risk.

Legal experts say that when tech companies promise their AI products are “HIPAA compliant” or “HIPAA ready,” it’s often unclear whether these claims amount to anything more than a promise not to use health data irresponsibly. 

These distinctions matter when it comes to personal health data. Geoghegan said it is not uncommon in some corners of the wellness industry for an unregulated business to ambiguously claim it is “HIPAA-compliant” to obscure the fact that it isn’t legally bound by the regulations.

“Generally speaking, a lot of companies say they’re HIPAA compliant, but what they mean is that they’re not a HIPAA regulated entity, therefore they have no obligation,” said Geoghegan.

Groome suggested that AI companies are being “hyperbolic” in their commitment to security in an effort to assuage the concerns of privacy critics, noting that their product announcements contain “a comical level of how much they say they’re going to protect your information.”

An added wrinkle is that AI tools remain black boxes in some respects, with even their developers unable to fully understand or explain how they work. That kind of uncertainty, especially with healthcare data, can lead to bad security or privacy outcomes.

“It’s really shaky right now when a company comes out and says ‘we’re fully HIPAA compliant’ and I think what they’re doing is trying to give the consumer a false sense of trust,” said Groome.

Several sources told CyberScoop that despite these risks, they expect AI health apps to continue being widely used, in part because the traditional American healthcare system remains so expensive.  

AI tools – by contrast – are convenient, immediate and cost-effective. While Geoghegan and Groome say they are sympathetic to the pressures that push people toward these apps, they find the tradeoffs troubling.

“A lot of this stems from the fact that care is inaccessible, it’s hard to get and it’s expensive, and there are many reasons why people don’t trust in health care provisions,” said Geoghegan. “But the solution to that care being inaccessible cannot be relying on big tech and billionaires’ products. We just can’t trust [them] to have our best health interest in mind.”

The post Your AI doctor doesn’t have to follow the same privacy rules as your real one appeared first on CyberScoop.

Bipartisan health care cybersecurity legislation returns to address a cornucopia of issues

A bipartisan group of senators is looking to tackle health care cybersecurity by reviving legislation that would update regulations and guidelines, authorize grants, offer training and clarify federal agency roles.

It’s a subset of cybersecurity where Congress hasn’t enacted any sweeping changes to date. The resurrected Health Care Cybersecurity and Resiliency Act from Health, Education, Labor, and Pensions Committee Chairman Bill Cassidy, R-La., and his colleagues on both sides of the aisle emerges from a 2023 bipartisan health care cybersecurity working group.

Cassidy and his cosponsors — Mark Warner, D-Va., Maggie Hassan, D-N.H., and John Cornyn, R-Texas — first introduced the bill in late November last year, with little time left in the session to take action on it before Congress adjourned at the beginning of 2025.

“Cyberattacks in the health care sector can have a wide range of devastating consequences, from exposing private medical information to disrupting care in ERs — and it can be particularly difficult for medical providers in rural communities with fewer resources to prevent and respond to these attacks,” Hassan said in a news release Thursday.

The legislation aspires to improve coordination between the Department of Health and Human Services and the Cybersecurity and Infrastructure Security Agency, with steps like directing HHS to work with CISA state coordinators to provide training to health care owners and operators.

It would clarify HHS’s responsibilities and give it additional ones, such as directing it to develop a cybersecurity incident response plan. It also requires HHS to update Health Insurance Portability and Accountability Act (HIPAA) regulations so that health care entities use modern cybersecurity practices, and to issue guidance for rural health clinics on breach prevention.

And it authorizes a five-year grant program at HHS for select health care entities, like academic health and cancer centers, although it doesn’t specify a dollar amount.

Some of those goals are similar to provisions from other health care cybersecurity bills that haven’t become law, some of which emerged after the Change Healthcare ransomware attack that led to the biggest breach of health care data ever reported to federal regulators.

“Patients deserve absolute confidence that their sensitive medical data stored online is protected and shielded from cybersecurity breaches or ransomware attacks,” Cornyn said.

The post Bipartisan health care cybersecurity legislation returns to address a cornucopia of issues appeared first on CyberScoop.

Let’s Talk About Direct Object References

Kelsey Bellew // Maybe you don’t know what Direct Object References are; if you Google it, you’d get this: This description uses the words “direct”, “object” and “reference” to describe a […]
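The excerpt cuts off before the explanation, but the underlying idea is the Insecure Direct Object Reference (IDOR) class of access-control bugs: an application exposes an internal identifier (say, a record ID in a URL) and trusts the caller to only request objects they own. A minimal sketch of the flaw and the fix, using hypothetical record and user names not drawn from the article:

```python
# Hypothetical illustration of an Insecure Direct Object Reference (IDOR).
# All IDs, users, and data below are invented for the example.

RECORDS = {
    101: {"owner": "alice", "note": "alice's private note"},
    102: {"owner": "bob", "note": "bob's private note"},
}

def get_record_vulnerable(record_id):
    # Vulnerable: hands back whatever record the caller names,
    # with no check on who is asking.
    return RECORDS.get(record_id)

def get_record_checked(user, record_id):
    # Fixed: the object reference is honored only if the requesting
    # user actually owns the record; everything else is denied.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != user:
        return None  # deny by default
    return record
```

Here `get_record_vulnerable(102)` returns Bob's record to any caller who guesses the ID, while `get_record_checked("alice", 102)` returns `None` because the ownership check fails.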

The post Let’s Talk About Direct Object References appeared first on Black Hills Information Security, Inc..
