
NIST Publishes Guide for Protecting ICS Against USB-Borne Threats

1 October 2025 at 07:16

NIST Special Publication 1334 focuses on reducing cybersecurity risks associated with the use of removable media devices in OT environments.

(Source: SecurityWeek)

SEC Charges Four Companies Impacted by Data Breach with Misleading Cyber Disclosures

On October 22, 2024, the SEC announced settled enforcement actions charging four companies with making materially misleading disclosures regarding cybersecurity risks and intrusions. These cases mark the first charges against companies that were downstream victims of the well-known cyberattack on software company SolarWinds. The four companies were providers of IT services and digital communications products and settled the charges for amounts ranging from $990,000 to $4 million.

In 2023, the SEC sued SolarWinds and its Chief Information Security Officer for allegedly misleading disclosures and deficient controls. Most of the SEC’s claims in that case were dismissed by a judge in the Southern District of New York, in part because the judge ruled that SolarWinds’ post-incident disclosures did not misleadingly minimize the severity of the intrusion. This new round of charges indicates the SEC’s intent to continue to enforce disclosure and reporting requirements surrounding cybersecurity breaches. The SEC’s recent charges focus on the companies’ continued use of generic and hypothetical language following significant data breaches, as well as allegations of downplaying the severity of the breaches by omitting material information about their nature and extent. Public companies should carefully consider the lessons from these actions when making disclosures following a cybersecurity breach.  

Background

According to the SEC’s allegations, which the companies neither admitted nor denied, in December 2020, each of the four companies charged last week learned that its systems had been affected by the SolarWinds data breach. Public reporting at the time indicated that the breach was likely performed by a state-sponsored threat actor. Each of the companies investigated the breach and determined that the threat actor had been active in its systems for some period of time and had accessed certain company or customer information.[1]

The SEC brought negligence-based fraud charges against all four companies, alleging two primary types of materially misleading disclosures. Two companies, Check Point[2] and Unisys,[3] were charged because the SEC believed their post-breach risk factor disclosures—containing generic and hypothetical language about the risk of cybersecurity breaches, similar to their pre-breach disclosures—were misleading given that the companies had become aware of the actual SolarWinds-related breaches. The SEC alleged that the other two companies, Avaya[4] and Mimecast,[5] although they did make specific disclosures that they had been affected by cybersecurity breaches, misleadingly omitted details that the SEC asserted would be material to investors. The SEC noted that all four companies were in the information technology industry, with large private and government customers, and that their reputation and ability to attract and retain customers would therefore be affected by disclosure of a data breach.

The Charges

There were two categories of charges.

Charges for disclosing hypothetical cyber risks in the wake of an actual cyberattack. The SEC has repeatedly brought charges against companies for allegedly using generic and/or hypothetical language in their risk factors after a known data breach.[6] That trend has continued with the recent actions against Check Point and Unisys.

i. Check Point

Check Point’s Form 20-F disclosures in 2021 and 2022 stated, “We regularly face attempts by others to gain unauthorized access…” and “[f]rom time to time we encounter intrusions or attempts at gaining unauthorized access to our products and network. To date, none have resulted in any material adverse impact to our business or operations.”[7] These filings were virtually unchanged before and after the data breach. The SEC alleged that these risk disclosures were materially misleading because the company’s risk profile materially changed as a result of the SolarWinds compromise-related activity for two reasons: the threat actor was likely a nation-state and the threat actor “persisted in the network unmonitored for several months and took steps, including deployment and removal of unauthorized software and attempting to move laterally” in the company’s environment.[8]

ii. Unisys

The company’s risk factors in its Form 10-Ks following the breach were substantially unchanged from 2019. The risk factor language was hypothetical: cyberattacks “could … result in the loss … or the unauthorized disclosure or misuse of information…” and “if our systems are accessed ….”[9] The SEC alleged that such hypothetical language is insufficient when the company is aware that a material breach has occurred. The SEC also alleged that the company did not maintain adequate disclosure controls and procedures because it had no procedures to ensure that, in the event of a known cybersecurity incident, information was escalated to senior management, which in this case did not happen for several months. The SEC’s order also alleged that the company’s investigative process after the breach “suffered from gaps that prevented it from identifying the full scope of the compromise,” and that these gaps constituted a material change to the company’s risk profile that should have been disclosed.[10]

Charges for allegedly failing to disclose material information. Two of the charged companies did disclose that their systems had been affected by suspicious activity, but the SEC nevertheless found fault with those disclosures.

i. Avaya

In its Form 10-Q filed two months after learning of the breach, the company disclosed that it was investigating suspicious activity that it “believed resulted in unauthorized access to our email system,” with evidence of access to a “limited number of Company email messages.”[11] The SEC alleged that these statements were materially misleading because they “minimized the compromise and omitted material facts” that were known to the company “regarding the scope and potential impact of the incident,”[12] namely: (i) that the intrusions were likely the work of a state actor, and (ii) that the company had been able to access only 44 of the 145 files compromised by the threat actor and therefore could not determine whether the remaining files contained sensitive information.[13]

ii. Mimecast

In its Form 8-Ks filed in the months after learning of the breach, Mimecast disclosed that an authentication certificate had been compromised by a sophisticated threat actor, that a small number of customers were targeted, that the incident was related to SolarWinds, and that some of the company’s source code had been downloaded. The company stated that the code was “incomplete and would be insufficient to build and run” any aspect of the company’s service.[14] The SEC alleged that these statements were materially misleading “by providing quantification regarding certain aspects of the compromise but not disclosing additional material information on the scope and impact of the incident,” such as the fact that the threat actor had accessed a database containing encrypted credentials for some 31,000 customers and another database with systems and configuration information for 17,000 customers. The SEC also faulted the company for not disclosing that the threat actor had exported more than half of the source code of the affected projects, or information about the importance of that code.[15]

Dissenting Statement

The two Republican Commissioners, Hester Peirce and Mark Uyeda, voted against the actions and issued a dissenting statement accusing the Commission of “playing Monday morning quarterback.”[16] The dissenters noted two key issues across the orders. First, the dissenters viewed the cases as requiring disclosure of details about the cybersecurity incident itself, despite previous Commission statements that disclosures should instead be focused on the “impact” of the incident.[17] Second, the dissenters argued that many of the statements the SEC alleged to be material would not be material to the reasonable investor, such as the specific percentage of code exfiltrated by the threat actor.[18]  

The SEC Is Not Backing Off After SolarWinds

These enforcement actions come months after the Southern District of New York rejected several claims the SEC brought against SolarWinds for the original breach.[19] The recent actions show that the SEC is not backing away from aggressively reviewing incident disclosures and other related cybersecurity disclosures. Notably, the SEC did not allege that any of the companies’ cybersecurity practices violated the Exchange Act’s internal controls provision. In an issue of first impression, the SolarWinds court held that the internal controls provisions focus on accounting controls and do not encompass the kind of cyber defenses at issue in that case. It is not clear whether the absence of such charges here represents the SEC adopting a new position after the SolarWinds ruling, or rather reflects that these cases involve different cybersecurity practices and intrusions. The SEC did allege failure to maintain proper disclosure controls in one of the four new orders, which was another allegation rejected by the SolarWinds court as insufficiently pled.[20] Moreover, the SolarWinds court dismissed claims that the company had misled its investors by making incomplete disclosures after its cyber intrusion, finding that the company adequately conveyed the severity of the intrusion and that any alleged omissions were not material or misleading. While the dissenters questioned whether the allegedly misleading disclosures here were any different than those in SolarWinds, at a minimum these cases show that the SEC will continue to closely scrutinize post-incident disclosures, notwithstanding its loss in SolarWinds.

Takeaways

There are several takeaways from these charges.

  • The SEC is signaling an aggressive enforcement environment and continuing to bring claims against companies for deficient disclosure controls, despite similar charges being rejected in SolarWinds. The Unisys order shows that the SEC will continue to pursue disclosure controls charges where, in its view, a company did not adequately escalate incidents to management, consider the aggregate impact of related incidents, or adopt procedures to guide materiality determinations, among other things.
  • The SEC will reliably bring charges against companies that use generic or hypothetical risk factor language to describe the threat of cybersecurity incidents when the company’s “risk profile changed materially”[21] due to a known breach.
  • The SEC will give heightened scrutiny to disclosures by companies in sectors such as information technology and data security, because in the SEC’s view cybersecurity breaches are more likely to affect the reputation and ability to attract customers for these types of companies.
  • Companies should take care in crafting disclosures about the potential impact of cybersecurity breaches, including in Form 8-K and risk factor disclosure, and consider factors such as:
    • Whether the threat actor is likely affiliated with a nation-state.
    • Whether, or the extent to which, the threat actor persisted in the company’s environment.
    • If the company seeks to quantify the impact of the intrusion, such as by the number of files or customers affected, the SEC will scrutinize whether the company selectively disclosed quantitative information in a misleading way.
    • Whether the company should disclose not only the number of files or amount of customer data compromised, but also the importance of the files or data and the uses that can be made of them.
    • If the company quantifies the impact of the intrusion but is aware of gaps in its investigation or in the available data that mean the severity of the impact could have been worse, the SEC may consider it misleading not to disclose those facts.

[1] For information on the four orders, see Press Release, SEC Charges Four Companies With Misleading Cyber Disclosures, SEC, https://www.sec.gov/newsroom/press-releases/2024-174.

[2] Check Point Software Technologies Ltd., Securities Act Release No. 11321, Exchange Act Release No. 101399, SEC File No. 3-22270 (Oct. 22, 2024).

[3] Unisys Corporation, Securities Act Release No. 11323, Exchange Act Release No. 101401, SEC File No. 3-22272 (Oct. 22, 2024).

[4] Avaya Holdings Corp., Securities Act Release No. 11320, Exchange Act Release No. 101398, SEC File No. 3-22269 (Oct. 22, 2024).

[5] Mimecast Limited, Securities Act Release No. 11322, Exchange Act Release No. 101400, SEC File No. 3-22271 (Oct. 22, 2024).

[6] Press Release, Altaba, Formerly Known as Yahoo!, Charged With Failing to Disclose Massive Cybersecurity Breach; Agrees To Pay $35 Million, SEC, https://www.sec.gov/newsroom/press-releases/2018-71; Press Release, SEC Charges Software Company Blackbaud Inc. for Misleading Disclosures About Ransomware Attack That Impacted Charitable Donors, SEC, https://www.sec.gov/newsroom/press-releases/2023-48.

[7] Check Point, supra note 2, at 2–4.

[8] Id.

[9] Unisys Corporation, supra note 3, at 6.

[10] Id. at 5–7.

[11] Avaya Holdings Corp., supra note 4, at 4.

[12] Id. at 2.

[13] Id. at 4.

[14] Mimecast Limited, supra note 5, at 4.

[15] Id.

[16] Statement, Comm’rs Peirce and Uyeda, Statement Regarding Administrative Proceedings Against SolarWinds Customers (Oct. 22, 2024), https://www.sec.gov/newsroom/speeches-statements/peirce-uyeda-statement-solarwinds-102224.

[17] Id.

[18] Id.

[19] See Cleary Alert Memo, SDNY Court Dismisses Several SEC Claims Against SolarWinds and its CISO (July 26, 2024).

[20] Id.

[21] Unisys Corporation, supra note 3, at 5.

New York Department of Financial Services Issues Guidance on Cybersecurity Risks Arising from Artificial Intelligence

Last week, the New York Department of Financial Services (“DFS”) issued guidance addressed to executives and information security personnel of entities regulated by DFS to assist them in understanding and assessing cybersecurity risks associated with the use of artificial intelligence (“AI”), and in implementing appropriate controls to mitigate such risks (the “Guidance”).[1] In particular, and in response to inquiries received by DFS regarding AI’s impact on cyber risk, the Guidance is intended to explain how the framework set forth in DFS’ Cybersecurity Regulation (23 NYCRR Part 500) should be used to assess and address such risks.

Below, we provide a high-level overview of the cyber risks identified by DFS related to the use of AI as well as the mitigating controls DFS recommends covered entities adopt to minimize the likelihood and impact of such risks.  Even for entities that are not regulated by DFS, the Guidance provides a roadmap for how other regulators may view AI-related cyber risks. 

Cybersecurity Risks Related to the Use of AI.  The Guidance identifies two categories of AI-related cybersecurity risks:

  • Risks caused by threat actors’ use of AI (e.g., AI-enabled social engineering and AI-enhanced cybersecurity attacks):

AI has enabled threat actors to create highly personalized and sophisticated social engineering attacks that are more convincing, and therefore more successful. In particular, threat actors are using AI to create audio, video and text “deepfakes” that target specific individuals, convincing employees to disclose sensitive information about themselves and their employers or share credentials enabling access to their organization’s information systems and nonpublic information. Deepfakes have also been used to mimic an individual’s appearance or voice to circumvent IT verification procedures as well as biometric verification technology.

AI has also allowed threat actors to amplify the “potency, scale, and speed of existing types of cyberattacks.” For example, AI can be used to more efficiently identify and exploit security vulnerabilities, allowing broader access to protected information and systems at a faster rate. It can also accelerate the development of new malware variants and enhance ransomware such that it can bypass defensive security controls, evading detection. Even threat actors who are not technically skilled may now be able to launch attacks using AI products and services, resulting in a potential increase in the number and severity of cyberattacks.

  • Risks caused by a covered entity’s use of or reliance upon AI:

Products that use AI require the collection and processing of substantial amounts of data, including non-public information (“NPI”). Covered entities that develop or deploy AI are at risk because threat actors have a greater incentive to target these entities to extract NPI for malicious purposes and/or financial gain. AI tools that require storage of biometric data, such as facial and fingerprint recognition data, pose a great risk, as stolen biometric data can be used to generate deepfakes, imitate authorized users, bypass multi-factor authentication (“MFA”) and gain access to NPI.

Working with third-party vendors to gather data for AI-powered tools exposes organizations to additional vulnerabilities. For example, if a covered entity’s vendors or suppliers are compromised in a cybersecurity incident, the entity’s NPI could be exposed, and the compromise could become a gateway for broader attacks on its network.

Measures to Mitigate AI-related Threats

Using its Cybersecurity Regulation as a framework, DFS suggests a number of controls and measures to help entities combat the aforementioned AI-related cybersecurity risks. Such controls include:

  • Designing cybersecurity risk assessments that account for AI-related risks in the use of AI by the covered entity and its vendors and suppliers;
  • Applying robust access controls to combat deepfakes and other AI-enhanced social engineering attacks (see the illustrative sketch following this list);[2]
  • Maintaining defensive cybersecurity programs to protect against deepfakes and other AI threats;
  • Implementing third party vendor and supplier policies and management procedures that include due diligence on threats facing such vendors and suppliers from the use of AI and how such threats, if exploited, could impact the covered entity;
  • Enforcing data minimization policies to limit the NPI a threat actor can access if MFA fails; and
  • Training AI development personnel on securing and defending AI systems as well as other personnel on drafting queries to avoid disclosing NPI.
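
As an illustration of how the access-control and authentication recommendations above (including the suggestions in note [2]) might be operationalized, the minimal Python sketch below classifies authentication factors by whether AI-generated deepfakes can impersonate them and refuses to count SMS, voice or video verification toward MFA. It is a hypothetical sketch only: the factor names and the two-factor threshold are assumptions made for illustration and are not drawn from the Guidance or from any particular product.

    # Hypothetical policy check reflecting the DFS suggestion (note [2]) to prefer
    # authentication factors that AI deepfakes cannot impersonate. Factor names
    # and the two-factor threshold are illustrative assumptions.
    from dataclasses import dataclass

    # Factors the Guidance flags as vulnerable to AI-enhanced impersonation.
    DEEPFAKE_SUSCEPTIBLE = {"sms_otp", "voice_callback", "video_verification"}

    # Factors the Guidance cites as resistant: digital certificates, security keys.
    PHISHING_RESISTANT = {"digital_certificate", "physical_security_key"}

    @dataclass
    class AuthAttempt:
        user_id: str
        factors: set  # factor types presented during login

    def satisfies_policy(attempt: AuthAttempt) -> bool:
        """Require at least one deepfake-resistant factor and do not count
        susceptible factors (SMS, voice, video) toward the MFA requirement."""
        resistant = attempt.factors & PHISHING_RESISTANT
        counted = attempt.factors - DEEPFAKE_SUSCEPTIBLE
        return bool(resistant) and len(counted) >= 2

    # A password plus a physical security key passes; a password plus an SMS
    # one-time code does not, because SMS is excluded from the count.
    print(satisfies_policy(AuthAttempt("user1", {"password", "physical_security_key"})))  # True
    print(satisfies_policy(AuthAttempt("user2", {"password", "sms_otp"})))                # False

In practice a covered entity would enforce a rule like this in its identity provider rather than in application code; the point is simply that factors susceptible to AI impersonation are excluded from satisfying MFA, consistent with the Guidance’s note on deepfake-resistant authentication.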

Conclusion

As AI continues to evolve, so too will AI-related cybersecurity risks, making it critically important that all companies are proactive in identifying, assessing and mitigating the risks applicable to their businesses. To ensure speedy detection of, and response to, such threats, and to help avoid regulatory scrutiny or enforcement, covered entities should review, and where necessary update, their existing cybersecurity policies and procedures and implement mitigating controls, using the Cybersecurity Regulation as a framework in line with DFS’ Guidance.


[1] A copy of the DFS Guidance can be found here.

[2] Notably, DFS encourages entities to consider using authentication factors that can withstand AI-manipulated deepfakes and other AI-enhanced attacks by avoiding authentication via SMS text, voice or video, and instead using forms of authentication that AI deepfakes cannot impersonate, such as digital-based certificates and physical security keys. Additionally, DFS recommends using technology with liveness detection or texture analysis, or requiring authentication via more than one biometric modality at the same time, to protect against AI impersonation.

DOJ Brings Lawsuit Against TikTok Over Alleged Violations of the Children’s Online Privacy Protection Act

Following on the heels of major developments coming out of the Senate last week to advance privacy protections for children online, the Department of Justice (“DOJ”) officially filed a lawsuit on Friday against TikTok, Inc., its parent company, ByteDance, and certain affiliates (collectively, “TikTok”), over alleged violations of the Children’s Online Privacy Protection Act (“COPPA”), its implementing rule (the “COPPA Rule”), and an existing 2019 FTC consent order (the “2019 Order”) that resolved earlier allegations of the same.[1]

After an investigation by the Federal Trade Commission (“FTC”) into TikTok’s compliance with the 2019 Order allegedly revealed a flagrant, continued disregard for children’s privacy protections, the FTC took the rare step of releasing a public statement referring the complaint to the DOJ, which subsequently filed suit in the Central District of California last week.  “TikTok knowingly and repeatedly violated kids’ privacy, threatening the safety of millions of children across the country,” said FTC Chair Lina M. Khan.  “The FTC will continue to use the full scope of its authorities to protect children online—especially as firms deploy increasingly sophisticated digital tools to surveil kids and profit from their data.”

According to the complaint, TikTok is alleged to have violated not only COPPA and the COPPA Rule but also the 2019 Order by:

  1. Knowingly allowing millions of children under thirteen to create and use TikTok accounts that are not reserved for children, enabling full access to the TikTok platform to view, make and share content without verifiable parental consent;
  2. Collecting extensive data, including personal information, from children without justification and sharing it with third parties without verifiable parental consent;
  3. Failing to comply with parents’ requests to delete their children’s accounts or personal information; and
  4. Failing to delete the accounts and information of users TikTok knows are children in direct violation of the 2019 Order. 

The DOJ’s complaint highlights a number of actions undertaken by TikTok that allegedly led to “unlawful, massive-scale invasions of children’s privacy.” It contains several allegations that TikTok knowingly disregarded its obligations under applicable law and under the 2019 Order, which required TikTok to prevent child users from accessing its platform without verifiable parental consent and to take measures to protect, safeguard and ensure the privacy of the information of its child users once obtained. Among other things, the DOJ alleged the following unlawful practices:

  • Insufficient Age Identification Practices.  Although TikTok has implemented age gates on its platform since March 2019 in an effort to direct users under thirteen to TikTok Kids Mode (a version of the app designed for younger users that allows them to view videos but not create or upload videos, post information publicly or message other users), the complaint alleges that TikTok continued to knowingly create accounts for child users that were not on Kids Mode, without requesting parental consent, by allowing child users to evade the age gate.  Specifically, after entering their birthdates and being directed to Kids Mode, under-age users could simply restart the account creation process and provide a new birthdate to gain access to the general TikTok platform without restriction (even though TikTok knew it was the same person); alternatively, users could avoid the age gate entirely by logging in via third-party online services, in which case TikTok did not verify the user’s age at all (see the illustrative sketch following this list).
  • Unlawful and Overinclusive Data Collection from Child Users. Even where child users were directed to Kids Mode, the complaint alleges that personal information was collected from children, such as username, password and birthday, as well as other persistent identifiers such as IP addresses or unique device IDs, without providing notice to parents and receiving consent as required under COPPA.  TikTok also collected voluminous account activity data, which was then combined with persistent identifiers to amass profiles on child users and widely shared with third parties without justification.  For example, until at least mid-2020, TikTok is alleged to have shared information collected via Kids Mode accounts with Facebook and AppsFlyer, a third-party marketing analytics firm, to increase user engagement; the collection and sharing of persistent identifiers without parental consent was unlawful under the COPPA Rule because use of such data was not limited to the purpose of providing “support” for TikTok’s “internal operations”.
  • Failures to Honor Deletion Requests.  Though the COPPA Rule and the 2019 Order required TikTok to delete personal information collected from children at their parents’ request, TikTok failed to inform parents of this right and, separately, failed to act upon such requests.  To request deletion, parents allegedly had to navigate an unreasonable and burdensome process, oftentimes undertaking a series of convoluted administrative actions before TikTok would delete their child’s account, including scrolling through multiple webpages to find and click on a series of links and menu options that gave no clear indication that they applied to such a request.  Even where parents successfully navigated this process, their requests frequently went unhonored due to rigid policies maintained by TikTok related to account deletion.[2]  The complaint also suggests that even where such accounts were deleted, TikTok retained certain personal information related to such users, such as application activity log data, for up to eighteen months without justification.
  • Failures to Delete Accounts Independently Identified by TikTok as Children’s Accounts. In clear violation of the 2019 Order, TikTok is also alleged to have employed deficient technologies, processes and procedures to identify children’s accounts for deletion, and even appears to have ignored accounts flagged by its own human content moderators as belonging to a child and ripe for deletion.  Instead, despite strict mandates to delete such accounts, TikTok’s internal policies permitted account deletion only if rigid criteria were satisfied—such as explicit admissions by the user of their age—and provided human reviewers with insufficient resources or time to conduct even the limited review permitted under such policies.[3]
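
To illustrate the age-gate gap described in the first bullet above, the short Python sketch below shows a gate that remembers the first birthdate submitted from a given device, so that restarting sign-up with a different birthdate does not change the outcome. This is a hypothetical sketch of the general technique only: the device identifier, storage and routing names are assumptions, it does not describe TikTok’s actual systems, and a real implementation would also need to cover sign-ups made through third-party login services (the second gap alleged in the complaint).

    # Hypothetical age gate keyed to a device identifier: the first birthdate
    # submitted is remembered, so retrying sign-up with a different birthdate
    # cannot bypass the gate. Names and storage are illustrative assumptions.
    from datetime import date

    COPPA_AGE = 13

    # Stand-in for durable server-side storage keyed to a device/installation ID.
    _first_seen_birthdate = {}

    def age_on(birthdate, today):
        years = today.year - birthdate.year
        if (today.month, today.day) < (birthdate.month, birthdate.day):
            years -= 1
        return years

    def gate_signup(device_id, claimed_birthdate, today):
        """Route sign-up based on the first birthdate ever submitted from this
        device; later attempts to enter a different birthdate are ignored."""
        birthdate = _first_seen_birthdate.setdefault(device_id, claimed_birthdate)
        return "kids_mode" if age_on(birthdate, today) < COPPA_AGE else "general"

    # A nine-year-old is routed to Kids Mode; restarting sign-up from the same
    # device with an adult birthdate no longer reaches the general platform.
    print(gate_signup("device-1", date(2015, 6, 1), date(2024, 10, 22)))  # kids_mode
    print(gate_signup("device-1", date(1990, 6, 1), date(2024, 10, 22)))  # kids_mode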

In addition to a permanent injunction to cease the infringing acts and prevent further violations of COPPA, the complaint requests that the court impose civil penalties against TikTok under the FTC Act, which allows civil penalties of up to $51,744 per violation, per day.  Given the uptick in recent enforcement related to children’s privacy issues and the potential for material fines, entities should carefully consider the applicability of COPPA to their existing products and services, as well as their existing policies, practices and product functionality, to ensure compliance and avoid regulatory scrutiny.


[1] Specifically, the 2019 Order (i) imposed a $5.7 million civil penalty, (ii) required TikTok to destroy personal information of users under the age of thirteen and, by May 2019, remove accounts of users whose age could not be identified, (iii) enjoined TikTok from violating the COPPA Rule and (iv) required TikTok to retain certain records related to compliance with the COPPA Rule and the 2019 Order.

[2] According to the complaint, in a sample of approximately 1,700 children’s TikTok accounts about which TikTok received complaints and deletion requests between March 21, 2019, and December 14, 2020, approximately 500 (30%) remained active as of November 1, 2021, and several hundred were still active in March 2023.

[3] For example, despite having tens of millions of monthly active users at times since the entry of the 2019 Order, TikTok’s content moderation team included fewer than two dozen full-time human moderators responsible for identifying and removing material that violated all of its content-related policies, including identifying and deleting accounts of unauthorized users under thirteen.  Further, during at least some periods since 2019, TikTok human moderators spent an average of only five to seven seconds reviewing each flagged account to determine if it belonged to a child.
