
Fulton County lawsuit claims feds used ‘gross mischaracterizations’ to justify raid

By: djohnson
18 February 2026 at 10:59

A former federal official who tested and certified voting machines used in Fulton County, Georgia for the 2020 presidential election told a court that the federal government misrepresented key facts and omitted exculpatory public evidence while seeking a warrant in last month’s law enforcement raid.

The raid, carried out by the FBI and overseen by Director of National Intelligence Tulsi Gabbard, saw agents seize ballots and other documentation from the Fulton County election offices. A public affidavit cited five core allegations related to the county’s recordkeeping, electronic ballot image storage, and election night reporting. Authorities allege these issues point to a potential conspiracy to intentionally manipulate the vote count in favor of Democrat Joe Biden.

Fulton County officials sued the federal government in response, arguing that the affidavit used to obtain a warrant for the raid “does not identify facts that establish probable cause that anyone committed a crime.”

Another filing includes sworn testimony from Ryan Macias, an elections expert who tested and certified the county’s voting machines while at the Election Assistance Commission. In his testimony, Macias told the court that the government’s key claims have already been investigated and found to be baseless.

He said the FBI’s “many individual omissions and misstatements” in its affidavit reflect “gross mischaracterizations” of how elections work and directly contradict the conclusions of multiple prior investigations into the November 2020 election in Fulton County.

“Once the statements and omissions in the Affidavit are corrected and based on my experience administering elections, the Affidavit does not have a substantial basis in reality,” Macias stated.

For instance, the FBI’s affidavit cites the absence of scanned images of all 527,925 ballots for the original count and recount. But Macias, who served as an adviser to Fulton County and witnessed pre- and post-election operations in 2020, said this was standard practice. Jurisdictions typically send only the vote count records from their machines on election night, because ballot images and audit logs are much larger files that can slow down the reporting process.

Macias also noted that the FBI affidavit omits that this issue was already investigated by Republican Secretary of State Brad Raffensperger, who found Georgia election workers weren’t required by law to preserve such images until a state law passed in 2021.

An investigator from Raffensperger’s office later told the Board of Elections that it was “important to note that ballots can be scanned and tabulated without capturing ballot images,” while general counsel Charlene McGowan testified that ballot images play no role in the vote tabulation process and that Fulton County’s paper ballots – counted three times – were the “most important” documents to verify the count.

“These explanations about the storing of ballot images have been publicly available for some time,” Macias noted.

Similarly, the FBI cites instances where some Fulton County ballots were scanned multiple times, claiming it shows evidence of “an intentional tabulation of ballots in a false matter” to make the recount and original vote counts match. The bureau also pointed to small, non-determinative differences between the county’s machine recount and totals from a hand-counted risk-limiting audit.

But the federal government again failed to mention in its petition for a warrant that these claims were “exhaustively” investigated by the Secretary of State’s office, which found the errors were benign, the duplicate ballots weren’t counted, and the discrepancies did not affect the state’s final tally in the 2020 presidential contest.

According to Macias, the government’s affidavit also contains basic factual errors about Fulton County’s reporting process, including misstating the official vote count and the date and time it was transmitted to state officials for tabulation.

The post Fulton County lawsuit claims feds used ‘gross mischaracterizations’ to justify raid appeared first on CyberScoop.

Lawmakers, election officials blast Trump administration after Fulton County raid 

By: djohnson
29 January 2026 at 14:31

Following a federal raid on Fulton County, Georgia’s Elections Office, lawmakers and state election officials sharply criticized the Trump administration, accusing the White House of chasing baseless internet conspiracy theories about fraud in the 2020 election. Officials also warned the raid could set a precedent for similar federal actions targeting the 2026 midterm elections.

According to Fulton County, federal officials seized 700 boxes of records related to the 2020 election, including physical ballots. The search warrant detailing the full list of records and evidence sought by the federal government remains sealed; however, details of the warrant were published by ProPublica on Wednesday evening.

In a press conference Thursday, Fulton County Board of Registration and Elections Chair Sherri Allen said the county was already planning to hand over the information at a court hearing scheduled for early February. Meanwhile, Fulton County Commission Chair Robb Pitts expressed concerns about ballot security now that the ballots are no longer in county custody.

At the National Association of Secretaries of State winter conference, Sen. Alex Padilla, D-Calif., said the federal raid should be a reminder “this can happen any point between now and this coming November.”

He also took a shot at the Trump administration’s state voter data collection efforts and the White House’s plan to conduct voter list maintenance “at the federal level.”

“Republican and Democratic secretaries: How does that make you feel about what they think about your integrity and professionalism?” Padilla said. “Those are your offices, your staff and teams.”

Jared Borg, a White House aide at the Office of Intergovernmental Affairs, gave a speech Thursday detailing how the Trump administration is repurposing the federal SAVE database as a voter citizenship verification tool. The database was historically used to track immigrant benefits, and Borg said the DOGE-led overhaul of SAVE in 2025 came in response to requests from states for better functionality to cross-check voters. Previously, SAVE charged states $1 for each name lookup and did not allow bulk searches. Now, Borg said, state officials can run “millions of queries at no cost.”

Afterwards, Borg faced numerous questions and criticisms from state secretaries and officials who challenged the federal government’s role in setting election rules.

Some Republican state officials, like Utah Lt. Governor Deidre Henderson, pushed back hard against the Trump administration’s approach with election officials, pointing to comments from Assistant Attorney General Harmeet Dhillon and others.

“Things that have been said publicly, frankly, are quite appalling,” said Henderson, who oversees elections in her state. “She pretty much slandered all of us, and to me that’s problematic, to publicly claim that Secretaries of State are not doing our jobs and the federal government has to do it for us. That is not okay.”

Arizona Secretary of State Adrian Fontes told CyberScoop that he believes the federal government’s efforts are to serve “the grievance of one person, because he’s a sore loser, and it’s embarrassing.”

“This is outrageous that we’re still relitigating what happened six or seven years ago from a guy who is currently president of the United States,” Fontes said in an interview.

While he’s confident in the integrity of Arizona’s elections should a similar federal raid occur, Fontes noted the “enormous amount of power” prosecutors have. 

“They can do enormous damage to the integrity of systems, to the trust that people have in systems, to personal lives, and they can do it through this purportedly legal framework,” he said.

Borg said Director of National Intelligence Tulsi Gabbard, along with Homeland Security Secretary Kristi Noem, would provide further details on the administration’s plans during appearances at the conference on Friday.

Gabbard’s presence at the Fulton County raid has puzzled and alarmed veterans of ODNI’s election team and Democratic lawmakers. Among the concerned lawmakers is Sen. Mark Warner, D-Va., who sits on the Senate Select Committee on Intelligence, which oversees ODNI.

“Why is Tulsi Gabbard at an FBI raid on an election office in Fulton County?” asked Warner, who has long focused on election security issues, from boosting federal funding for states to replace outdated equipment to coordinating with ODNI’s election threats team.

By law, ODNI and its election team are supposed to focus on foreign threats, such as disinformation campaigns and hack-and-leak operations carried out by hostile governments. Under the Biden administration, the office had a defined process for investigating, vetting and communicating intelligence about ongoing foreign threats to victims. The office also periodically updated Congress and the public about campaigns, including where they originated, what resources were being deployed and who was being targeted.

In these briefings, officials deliberately used neutral language and avoided partisan messaging to prevent the process from appearing politicized.

One possible rationale for Gabbard’s presence: right-wing media has circulated conspiracy theories that claim foreign countries like Venezuela, China or Italy conspired with the CIA and other federal agencies to remotely hack into U.S. voting machines. After U.S. forces raided Venezuela and removed President Nicolas Maduro from power, Trump retweeted a post about one such theory called “Hammer and Scorecard.” Weeks earlier, Trump had suggested he intended to pursue prosecutions for election fraud.

Attorney General Pam Bondi has also directly connected ongoing immigration enforcement efforts in Minnesota to the administration’s push to collect sensitive voter data from states, either voluntarily or through lawsuits. The administration and some states have used this data to aggressively challenge the eligibility of legally registered voters. These challenges often target voters over minor paperwork errors that are decades old, which experts overwhelmingly say have no meaningful impact on voters’ active registration status.

The administration has sued dozens of states, but has lost repeatedly in court. Multiple federal courts have ruled that the DOJ’s demands are legally baseless and an unconstitutional overreach by the executive branch.

On Thursday, 26 Senate Democrats demanded briefings from Bondi and other administration officials to answer questions about the data gathering efforts. The senators noted that courts have already thrown out the administration’s lawsuits in Oregon and California. Meanwhile, 11 states, including Texas, have provided the administration with voter data, which has “dramatically increased” the amount of voter information flowing to the federal government.

“While most states are resisting this illegal voter roll grab, we are gravely concerned by the amount of sensitive data the Department has already amassed on millions of American voters,” the senators wrote. “The Department has failed to provide Congress, or the public, any information on how it is maintaining this vast amount of data, the guardrails in place to protect state voter information, how the data is to be used, or who in the federal government has access to this sensitive data.”

The post Lawmakers, election officials blast Trump administration after Fulton County raid  appeared first on CyberScoop.

The quiet way AI normalizes foreign influence

By: Greg Otto
15 January 2026 at 09:30

Americans are being taught to trust propaganda. Often, it’s not intentional. A classic bit of advice for separating propaganda from real research is “Check the citations.” If the sources support the analysis, the material can be trusted. But AI is changing the rules of the game.

In December, the White House announced new guidance to ensure that AI tools procured for government use are “truthful” and “ideologically neutral,” including transparency around citation practices. But even with this new oversight, there is a structural issue the memo can’t fix: authoritarian states are optimizing their propaganda for AI consumption while America’s most credible news sources are actively blocking AI tools. This means that even an ideologically neutral AI directs users toward state-aligned propaganda — simply because that is what is freely available.

Those who trust AI citations wind up trusting propaganda while believing they are doing responsible research.

Most large language models (LLMs) provide sources along with their analysis. But these models do not choose what sources to cite based on credibility. Rather, they choose based on availability. Many of the best sources, like top U.S. news outlets, are behind paywalls or are blocking the automated systems that AI uses to scan and collect information. These legacy media companies are slowly litigating and negotiating individual licensing deals with AI unicorns.

Authoritarian states, on the other hand, have optimized their content for accessibility. State-run media, like Qatar’s Al Jazeera, or Russian and Chinese outlets published in English, are free. That results in students, academics and federal analysts seeking to understand Gaza, Ukraine, or Taiwan being more likely to engage with state-backed propaganda than independent journalism.

Research from the Foundation for Defense of Democracies analyzing three major LLMs (ChatGPT, Claude, and Gemini) found that 57 percent of responses to questions about current international conflicts cited state-aligned propaganda sources.

When AI tools answer questions about contested conflicts — including Gaza, Ukraine, and Taiwan — they draw on enormous training data. While not perfect, the responses are often more nuanced than any one commentator or media outlet. But LLMs then funnel their hundreds of millions of users to a narrow subset of sources that they serve up as citations. FDD research found that 70 percent of neutral questions about the Israel-Gaza conflict yielded Al Jazeera citations.

This isn’t a minor technical flaw — citations are the attribution architecture shaping what Americans learn to trust.

While Western legacy media certainly carries its own biases, there is a crucial difference between editorial bias and state-controlled narratives. In 2024 alone, Russia-backed propaganda aggregator Pravda flooded the internet with more than 3.6 million articles from pro-Kremlin influencers and government spokespeople, in order to saturate the space with pro-Russian narratives.

AI sometimes fabricates information, or “hallucinates,” and that presents real risks. But urging people to “check the linked sources” can end up steering them straight to state-controlled media. Those links aren’t citations in the traditional sense — they are traffic directions. And the traffic they generate turns into revenue, which ultimately determines which news outlets survive. AI platforms are becoming the internet’s traffic arbiters, and right now they’re systematically directing traffic away from independent journalism and toward state-controlled propaganda.

AI companies must bring credible journalism into their systems. There is no question that quality journalism requires resources and revenue to survive. Unfortunately, the licensing deals that are being negotiated now between LLM companies and media outlets are moving slowly. Every delay allows citation patterns to harden while we are increasingly vulnerable to foreign influence.

There’s no silver bullet, but a patchwork of solutions can help. The White House has already taken a strong stance by requiring agency heads to restrict AI procurement to LLMs that are “ideologically neutral” and not “in favor of ideological dogmas.” Vendors selling to the U.S. government should present data on citation influence.

An LLM literacy campaign is needed so users understand citation bias. But awareness alone isn’t enough — AI companies should give lower priority to state-controlled media in their outputs and label them as such. And as LLMs evolve from being a consumer technology into a common infrastructure like the internet itself, citation patterns should be considered in AI safety frameworks — because a healthy democratic society needs a broad array of media sources, and that means independent journalism will always need support.

Leah Siskind is director of impact and an AI research fellow at the Foundation for Defense of Democracies.

The post The quiet way AI normalizes foreign influence appeared first on CyberScoop.

Trump pulls US out of international cyber orgs

8 January 2026 at 12:39

The Trump administration is withdrawing the United States from a handful of international organizations that work to strengthen cybersecurity.

As part of a broader pullback from 66 international organizations, the administration is leaving the Global Forum on Cyber Expertise, the Freedom Online Coalition and the European Centre of Excellence for Countering Hybrid Threats.

Trump’s decision is in line with a president who has expressed hostility toward the existing international order, an approach critics fear creates a leadership power vacuum for U.S. adversaries to fill.

“The Trump Administration has found these institutions to be redundant in their scope, mismanaged, unnecessary, wasteful, poorly run, captured by the interests of actors advancing their own agendas contrary to our own, or a threat to our nation’s sovereignty, freedoms, and general prosperity,” Secretary of State Marco Rubio said in a statement Thursday. “President Trump is clear: It is no longer acceptable to be sending these institutions the blood, sweat, and treasure of the American people, with little to nothing to show for it. The days of billions of dollars in taxpayer money flowing to foreign interests at the expense of our people are over.”

Rubio criticized the international organizations over “DEI mandates,” “‘gender equity’ campaigns” and activities that “constrain American sovereignty.”

The Global Forum on Cyber Expertise works on issues such as critical infrastructure protection, cybercrime, cyber skills and policy and emerging technology. Its members include nations and government organizations like Interpol, but also tech companies like Hewlett Packard, Mastercard and Palo Alto Networks.

The forum says it supports gender inclusivity, asserting that “gender is a cross cutting issue with direct relevance to achieving international peace and security.”

A former president of the Global Forum on Cyber Expertise Foundation, Chris Painter, said he was “surprised” by the withdrawal.

“It’s a non-political capacity-building platform that the U.S. helped establish and that has done good work in the Western Balkans and Asian Pacific, among other places, that I think advances U.S. interests,” said Painter, also the former top cyber diplomat at the State Department.

Ron Deibert, a professor of political science and the founder and director of the University of Toronto’s Citizen Lab, said the withdrawal from the forum and the cuts at the U.S. Cybersecurity and Infrastructure Security Agency would “further erode network security coordination at a time when the magnitude of cyber threats are rapidly increasing.”

Nina Jankowicz, a former Biden administration disinformation official who is now head of the American Sunlight Project, a nonprofit dedicated to fighting disinformation, took note of the Trump administration — “which claims to care about free speech” — exiting the Freedom Online Coalition, which counts as its goals the support of “free expression, association, assembly, and privacy online.”

The coalition has campaigned against cybersecurity laws that suppress human rights and cyberattacks that imperil individual safety.

The European Centre of Excellence for Countering Hybrid Threats works to protect its members, which include members of the North Atlantic Treaty Organization, from an array of threats, among them those that manifest in cyberspace.

The Trump administration also withdrew from other organizations whose work more tangentially touches on cybersecurity, such as the International Law Commission.

Whatever flaws there are in some of the organizations Trump withdrew from, they are contributors to the “international rules-based order,” Deibert said.

“Without state participation, especially the powerful rich states, these forums will grind to a halt,” he said. “Even on a symbolic level, having a government like the U.S. ‘not there’ means very little can happen on a global level. This will likely lead to more regionalization and likely greater spaces for corruption and authoritarian practices to spread.”

The U.S. decision will “inevitably weaken the rights and security of Americans and people around the world for years to come,” said Alexandra Givens, president of the Center for Democracy and Technology.

“Americans should be concerned that their government is abandoning longstanding efforts to advance democracy, defend human rights online, and stop the abuses of spyware, particularly as free expression comes under attack from governments around the world — including our own,” Givens said. “U.S. participation in international collaboration on human rights standards helps keep Americans safe.”

The post Trump pulls US out of international cyber orgs appeared first on CyberScoop.

AI, voting machine conspiracies fill information vacuum around Venezuela operation 

By: djohnson
5 January 2026 at 17:52

The surprise raid by U.S. armed forces and law enforcement agencies in Caracas, Venezuela, had observers around the world scouring social media and news outlets for updates on an operation that saw Venezuelan President Nicolás Maduro and his wife captured and flown to the United States to face criminal charges.

The Trump administration initially offered few details about the attack and reportedly declined to notify allies or the bipartisan Gang of Eight in Congress ahead of time. The information vacuum around the U.S. action and the motivations behind it was quickly filled by online accounts posting realistic-looking but fake images and videos; by right-wing disinformation artists connecting the operation to debunked conspiracy theories about Venezuela remotely manipulating U.S. voting machines; and by widespread messaging in Spanish-speaking online groups depicting the U.S. as an aggressive, imperialist power seeking to control the resources of other countries.

In the early morning hours after the operation, fake imagery and media quickly flooded social media. A grainy image falsely depicted Maduro in a suit being escorted off an aircraft by camo-clad DEA agents, only for the White House to later stage and post its own (real) perp walk of Maduro online.

Guyte McCord, CEO of disinformation research firm Graphika, told CyberScoop they are observing high volumes of fairly standard activity online, from AI generated videos to ‘recycled’ footage from past conflicts being rebranded as current events.

“What we’re seeing so far is quite typical for high-attention geopolitical events: tactics designed to shape narratives and generate engagement while the ground truth remains fluid,” McCord said in a statement.

In the comment section of that White House post, users quickly posted their own realistic looking AI-altered videos, inserting other world leaders like Iranian Ayatollah Ali Khamenei in Maduro’s place, or depicting a distressed Maduro begging for his life in English while surrounded by DEA officials. A series of mislabeled and fake videos collected by the BBC’s Shayan Sardarizadeh include other depictions of Maduro’s capture that were generated through AI and spread online.

Narrative setting focused on oil, U.S. imperialism

Groups like the Digital Democracy Institute of the Americas track narratives in Latin American online spaces. The nonprofit typically monitors around 3,300 Spanish-language WhatsApp and Telegram groups, but expanded to roughly 100,000 groups to capture additional English-speaking channels discussing the Venezuela raid.

According to Cristina Tardáguila, an analyst and disinformation researcher at DDIA, the early narrative that gained widespread traction after the raid was that the U.S. intervention “is a thinly veiled mission to seize Venezuela’s oil wealth.”

“These posts claim that President Trump has already designated American companies to manage the country’s petroleum reserves, something he affirmed,” wrote Tardáguila. “This theme characterizes the operation as ‘theft’ and ‘robbery,’ dismissing humanitarian or democratic justifications.”

Adam Darrah, a former CIA analyst who spent eight years tracking Russian disinformation operations, told CyberScoop that both Russia and China have long maintained close relations with Venezuela, viewing the country “as a beachhead into the United States’ very powerful sphere of influence here in the Western hemisphere.”

“You have these great powers going and competing for hearts and minds, and that’s what I’m seeing,” said Darrah, now vice president of intelligence at cybersecurity firm ZeroFox. “I’m seeing three adversarial governments, two of which are trying to maintain a beachhead” that is “gone, at least for now.”  

After the attack, Darrah said, he has seen mouthpieces on both sides scramble to respond, leaning heavily on past narratives that portray the United States as an imperialist aggressor, themes that were refined during the U.S. invasions and occupations of Afghanistan and Iraq.

But Tardáguila also acknowledged that the administration has not put forth clear messaging, with Trump himself saying Venezuela stole its oil reserves from the U.S.

Compounding this, she also noted that “President Donald Trump did not cite human rights or democracy in his press conference” following the attack.

Darrah told CyberScoop that, like most disinformation, he believes the AI-generated videos being spread about Venezuela and Maduro are more about reinforcing existing beliefs and keeping supporters in line than about persuading new people or fooling skeptics.

“I have family members that clearly believe in AI-generated content…as long as the [content] makes them feel better about hating the thing they hate or loving the thing they love,” he said. “They don’t really care that it’s poorly or well done.”

A conspiracy theory lurches back to life

Domestically, some allies of President Trump quickly tied the Caracas attack to a long-running conspiracy about the 2020 election involving Venezuela and U.S. voting machines. 

Benny Johnson, a right-wing activist who has promoted claims that Dominion and Smartmatic were involved in a Venezuelan plot to alter vote counts for Joe Biden, suggested the U.S. targeted Maduro in part because he “knows where all the bodies are buried” with regards to the 2020 election.

“This is why you see the globalists around the world bricking in their pants,” Johnson said.  “They’re terrified because Venezuela was ground zero for election theft.” 

The Trump campaign lost dozens of lawsuits claiming fraud following the 2020 election, and media outlets like Fox News and Newsmax, as well as Trump campaign lawyers Rudy Giuliani and Sidney Powell, eventually settled multibillion-dollar lawsuits brought by Smartmatic and Dominion and publicly acknowledged they had no proof for their claims.

While administration officials have described the Caracas incursion as a law enforcement operation and have not cited the 2020 election, Trump himself posted a two-minute video clip without comment early Monday of people alleging Dominion voting machines were manipulated in the election to favor Biden.

The post AI, voting machine conspiracies fill information vacuum around Venezuela operation  appeared first on CyberScoop.

Advocacy group calls on OpenAI to address Sora 2’s deepfake risks

By: djohnson
12 November 2025 at 13:21

Throughout 2024, OpenAI teased the public release of Sora, its new video generation model, capable of creating lifelike visuals from user prompts.

But due to concerns about the tool being used to create realistic disinformation during a critical U.S. election year, the company delayed its release until after the elections.

Now, a year later, critics warn that their fears about Sora’s reality-distortion powers have come to pass, with the tool flooding the internet with false, fabricated or manipulated AI content, often with minimal or no labeling to indicate the media is synthetic.

“The rushed release of Sora 2 exemplifies a consistent and dangerous pattern of OpenAI rushing to market with a product that is either inherently unsafe or lacking in needed guardrails,” wrote J.B. Branch, who leads AI accountability work at nonprofit Public Citizen, in a Nov. 11 letter addressed to OpenAI CEO Sam Altman.

Branch added that releasing Sora 2 shows “reckless disregard” for product safety, the rights of public figures whose names or images could be deepfaked, and consumer protections against other abuses.

Public Citizen is pressing OpenAI to temporarily take the tool offline and work with outside experts to build better guardrails.

“We urge you to pause this deployment and engage collaboratively with legal experts, civil rights organizations, and democracy advocates to establish real, hard technological and ethical redlines” around Sora, the group wrote.

Generative AI models have been able to create deepfakes for years, but the technology was often plagued by a string of identifiable visual cues, such as people having more than five fingers or videos that appear overly polished or defy the laws of physics.

Over the past year, new tools like Sora have overcome many of those technical obstacles and can now deliver lifelike videos. The only indicator a video may be fake is a small Sora watermark from OpenAI in the lower right corner. Cybersecurity experts say it is trivial in many cases for bad actors to remove or crop out such labeling before sharing videos on social media as if they’re real.

Compounding matters, while OpenAI and other AI image and video generators have historically made efforts to prevent their tools from impersonating politicians, celebrities or copyrighted characters, Sora 2 initially launched with none of those guardrails in place. The first weeks of the release were filled with users sharing videos of Altman grilling Pikachu, a popular character from the anime Pokémon, and other fictional figures protected by copyright law.

In response to CyberScoop’s request for comment, an OpenAI spokesperson said the company adds visible watermarks to Sora videos and tracks their origins using metadata standards like the Coalition for Content Provenance and Authenticity. OpenAI also uses reverse-image, audio, and video tools to identify Sora-generated videos online.

“We have multiple guardrails intended to ensure that a living person’s likeness can’t be generated in Sora unless they’ve intentionally uploaded a cameo and given consent for it to be used,” the spokesperson said. “The feature is fully opt-in, backed by video-and-audio verification, and users control who can use their cameo. They can revoke access or remove any video that includes it at any time.”

The spokesperson referenced Sora’s system card, which describes technical specs and model limitations. It says, “where real people are featured in videos, additional model safeguards will apply” to prevent “non-consensual nudity or racy output, graphic violence, or output that could be used for certain fraudulent purposes.”

The document also acknowledges limits, noting, “some deceptive content is highly contextual and not easily detectable by classifiers” and that “there is not a single solution to provenance.” OpenAI said it plans to keep improving Sora’s safeguards.

Bala Kumar, chief product and technology officer at Jumio, said Sora 2 “lowers the barrier to deepfakes for everyone in the general public.”

“But what makes it accessible to everyday people makes it vulnerable to bad actors for misuse,” Kumar added. “While there’s a small watermark on these videos, fraudsters can easily remove it.”

In October, following objections from actor Bryan Cranston and the Screen Actors Guild-American Federation of Television Artists (SAG-AFTRA), OpenAI changed its policy to prevent Sora from generating videos of living celebrities or copyrighted figures.

However, that still allows people to create realistic and disruptive deepfakes without breaking OpenAI’s rules. For instance, the prohibition on public figures only extends to living people, meaning users can still generate videos of dead public figures.

This has led to videos that seem like harmless fun, such as rappers Tupac Shakur and The Notorious B.I.G. participating in a pro-wrestling-styled feud or singer Michael Jackson dancing at fast food restaurants and stealing chicken from customers.

But as the Washington Post has reported, Sora 2 has also been used to create racist videos of deceased public figures, like Martin Luther King Jr. stuttering and drooling, or John F. Kennedy joking about the assassination of right-wing personality Charlie Kirk. OpenAI called the videos of King Jr. “disrespectful” and pulled them offline after his relatives complained.

Beyond historical figures, Sora and other tools can easily be used to generate fake videos that tap into current political issues of the moment for virality. One recent example was a series of videos depicting Americans angrily reacting to food prices at grocery stores, in their cars and other locations.

The videos came while Congress and the White House were in a standoff over government funding, including the money needed for the Supplemental Nutrition Assistance Program (SNAP).  The videos showed AI-generated people saying things like “I ain’t paying for none of this s–t” and “It is the taxpayer’s responsibility to take care of my kids!”

It’s not clear what model was used to generate the videos, though some briefly flash a recognizable Sora watermark. Media outlets like Fox News initially published stories that treated the clips as genuine, with headlines like “SNAP beneficiaries threaten to ransack stores over government shutdown.” Fox News later updated its story and headline to note that the clips were AI-generated, and the story appears to have since been removed from the outlet’s website.

Outside of politics, these tools have plenty of potential to upend the lives of ordinary Americans who don’t hold power or appear on television. The most popular use of deepfakes by far in the generative AI era has been for nonconsensual pornography targeting women.

Although Public Citizen’s letter doesn’t accuse people of using Sora 2 to generate pornography, it criticizes OpenAI for allowing “non-nude fetish content” to proliferate on Sora’s social media platform.

“There is a dangerous lack of moderation pertaining to underage individuals depicted in sexual contexts, making Sora 2 unsuitable for public use,” Public Citizen wrote.

The post Advocacy group calls on OpenAI to address Sora 2’s deepfake risks appeared first on CyberScoop.
