
YouTube Opens 'Second Chance' Program To Creators Banned For Misinformation

By: BeauHD
9 October 2025 at 18:40
YouTube has launched a "second chance" program allowing some creators previously banned for COVID-19 or election misinformation to apply for new channels, as long as their violations were tied to policies that have since been deprecated. Bans for copyright or severe misconduct still remain permanent. The Verge reports: Under political pressure, the company had said last month that it was going to set up this pilot program for "a subset of creators" and "channels terminated for policies that have been deprecated." [...] The new pilot program kicks off today and will roll out to "eligible creators" over the "next several weeks," YouTube says. "We'll consider several factors when evaluating requests for new channels, like whether the creator committed particularly severe or persistent violations of our Community Guidelines or Terms of Service, or whether the creator's on- or off-platform activity harmed or may continue to harm the YouTube community." The pilot won't be available if you were banned for copyright infringement or for violating YouTube's Creator Responsibility policies, the company says. If you deleted your YouTube channel or Google account, you won't be able to request a new channel "at this time." And YouTube notes that if your channel has been banned, you won't be eligible to apply for a new one until one year after it was terminated. "We know many terminated creators deserve a second chance -- YouTube has evolved and changed over the past 20 years, and we've had our share of second chances to get things right with our community too," YouTube says. "Our goal is to roll this out to creators who are eligible to apply over the coming months, and we appreciate the patience as we ramp up, carefully review requests, and learn as we go."

Read more of this story at Slashdot.

YouTube Reinstating Creators Banned For COVID-19, Election Content

By: BeauHD
23 September 2025 at 18:00
YouTube's parent company, Alphabet, said it will reinstate creators previously banned for spreading COVID-19 misinformation and false election claims, citing free expression and shifting policy guidelines. The Hill reports: "Reflecting the Company's commitment to free expression, YouTube will provide an opportunity for all creators to rejoin the platform if the Company terminated their channels for repeated violations of COVID-19 and elections integrity policies that are no longer in effect," the company said in a letter to Rep. Jim Jordan (R-Ohio), chair of the House Judiciary Committee. "YouTube values conservative voices on its platform and recognizes that these creators have extensive reach and play an important role in civic discourse. The Company recognizes these creators are among those shaping today's online consumption, landing 'must-watch' interviews, giving viewers the chance to hear directly from politicians, celebrities, business leaders, and more," it added in the five-page correspondence. Alphabet blamed the Biden administration for limiting political speech on the platform. "Senior Biden Administration officials, including White House officials, conducted repeated and sustained outreach to Alphabet and pressed the Company regarding certain user-generated content related to the COVID-19 pandemic that did not violate its policies," the letter read. "While the Company continued to develop and enforce its policies independently, Biden Administration officials continued to press the Company to remove non-violative user-generated content," it continued. Guidelines were changed after former President Biden took office and urged platforms to remove content that encouraged citizens to drink bleach to cure COVID-19, as President Trump suggested in 2020, or join insurrection efforts launched on Jan. 6, 2021, to overturn his 2020 election win. But the company said the Biden administration's decisions were "unacceptable" and "wrong," while noting it would forgo future fact-checking mechanisms and instead allow users to add context notes to content.

Read more of this story at Slashdot.

AI Generated 'Boring History' Videos Are Flooding YouTube, Drowning Out Real History

By: BeauHD
3 September 2025 at 18:00
An anonymous reader quotes a report from 404 Media, written by Jason Koebler: As I do most nights, I was listening to YouTube videos to fall asleep the other night. Sometime around 3 a.m., I woke up because the video YouTube was autoplaying started going "FEEEEEEEE." The video was called "Boring History for Sleep | How Medieval PEASANTS Survived the Coldest Nights and more." It is two hours long, has 2.3 million views, and, an hour and 15 minutes into the video, the AI-generated voice glitched. "In the end, Anne Boleyn won a kind of immortality. Not through her survival, but through her indelible impact on history. FEEEEEEEEEEEEEEEE," the narrator says in a fake British accent. "By the early 1770s, the American colonies simmered like a pot left too long over a roaring fire," it continued. The video was from a channel I hadn't seen before, called "Sleepless Historian." I took my headphones out, didn't think much of it at the time, rolled over, and fell back asleep. The next night, when I went to pick a new video to fall asleep to, my YouTube homepage was full of videos from Sleepless Historian and several similar-sounding channels like Boring History Bites, History Before Sleep, The Snoozetorian, Historian Sleepy, and Dreamoria. Lots of these videos nominally check the boxes for what I want from something to fall asleep to. Almost all of them are more than three hours long, and they are about things I don't know much about. Some video titles include "Unusual Medieval Cures for Common Illnesses," "The Entire History of the American Frontier," "What It Was Like to Visit a BR0THEL in Pompeii," and "What GETTING WASTED Was Like in Medieval Times." One of the channels has even been livestreaming this "history" 24/7 for weeks. In the daytime, when I was not groggy and half asleep, it quickly became obvious to me that all of these videos are AI generated, and that they are part of a sophisticated and growing AI slop content ecosystem that is flooding YouTube, is drowning out human-made content created by real anthropologists and historians who spend weeks or months researching, fact-checking, scripting, recording, and editing their videos, and is quite literally rewriting history with surface-level, automated dreck that the YouTube algorithm delivers to people. YouTube has said it will demonetize or otherwise crack down on "mass produced" videos, but it is not clear whether that has had any sort of impact on the proliferation of AI-generated videos on the platform, and none of the people I spoke to for this article have noticed any change. "It's completely shocking to me," Pete Kelly, who runs the popular History Time YouTube channel, told Koebler in a phone interview. "It used to be enough to spend your entire life researching, writing, narrating, editing, doing all these things to make a video, but now someone can come along and they can do the same thing in a day instead of it taking six months, and the videos are not accurate. The visuals they use are completely inaccurate often. And I'm fearful because this is everywhere." "I absolutely hate it, primarily the fact that they're historically inaccurate," Kelly added. "So it worries me because it's just the same things being regurgitated over and over again. [...] It's worrying to me just for humanity. Not to get too high brow, but it's not good for the state of knowledge in the world. It makes me worry for the future."

Read more of this story at Slashdot.

YouTube Is Pausing Premium Family Plans if You Aren't Watching From the Same Address

By: msmash
2 September 2025 at 13:22
An anonymous reader shares a report: If you're sharing an ad-free YouTube Premium or YouTube Music account with friends or family who live outside of your home, you could lose your premium privileges. Customers who lose those privileges can still watch YouTube or listen to music with ads -- but let's be real, it's not the same. Multiple reports show that subscribers have been receiving notices that their premium service will be paused for 15 days for violating a policy that has been in place since 2023. On its support page, YouTube says that an account manager can add up to five family members in a household to their Premium membership. But, the post says, "Family members sharing a YouTube family plan must live in the same household as the family manager."

Read more of this story at Slashdot.

YouTube's Sneaky AI 'Experiment': Is Social Media Embracing AI-Generated Content?

24 August 2025 at 03:34
The Atlantic reports some YouTube users noticed their uploaded videos have since "been subtly augmented, their appearance changing without their creators doing anything..." "For creators who want to differentiate themselves from the new synthetic content, YouTube seems interested in making the job harder." When I asked Google, YouTube's parent company, about what's happening to these videos, the spokesperson Allison Toh wrote, "We're running an experiment on select YouTube Shorts that uses image enhancement technology to sharpen content. These enhancements are not done with generative AI." But this is a tricky statement: "Generative AI" has no strict technical definition, and "image enhancement technology" could be anything. I asked for more detail about which technologies are being employed, and to what end. YouTube is "using traditional machine learning to unblur, denoise, and improve clarity in videos," Toh told me. (It's unknown whether the modified videos are being shown to all users or just some; tech companies will sometimes run limited tests of new features.) While running this experiment, YouTube has also been encouraging people to create and post AI-generated short videos using a recently launched suite of tools that allow users to animate still photos and add effects "like swimming underwater, twinning with a lookalike sibling, and more." YouTube didn't tell me what motivated its experiment, but some people suspect that it has to do with creating a more uniform aesthetic across the platform. As one YouTube commenter wrote: "They're training us, the audience, to get used to the AI look and eventually view it as normal." Google isn't the only company rushing to mix AI-generated content into its platforms. Meta encourages users to create and publish their own AI chatbots on Facebook and Instagram using the company's "AI Studio" tool. Last December, Meta's vice president of product for generative AI told the Financial Times that "we expect these AIs to actually, over time, exist on our platforms, kind of in the same way that [human] accounts do...." This is an odd turn for "social" media to take. Platforms that are supposedly based on the idea of connecting people with one another, or at least sharing experiences and performances — YouTube's slogan until 2013 was "Broadcast Yourself" — now seem focused on getting us to consume impersonal, algorithmic gruel.
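YouTube has not said what its "traditional machine learning" enhancement actually consists of, but the non-generative processing Toh describes sits in the same family as classical denoising and sharpening filters. The sketch below is purely illustrative, not YouTube's pipeline: a generic unblur/denoise/clarity pass written with OpenCV, with the file names invented for the example.

```python
# Illustrative only: a classical (non-generative) "unblur, denoise, improve clarity"
# pass of the general kind described above. This is NOT YouTube's actual pipeline.
import cv2
import numpy as np

def enhance_frame(frame: np.ndarray) -> np.ndarray:
    # Denoise with non-local means, a traditional filter that reuses existing pixels
    # rather than synthesizing new detail the way a generative model would.
    denoised = cv2.fastNlMeansDenoisingColored(frame, None, 5, 5, 7, 21)
    # "Unblur" with an unsharp mask: subtract a blurred copy to exaggerate edges.
    blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=2.0)
    return cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

if __name__ == "__main__":
    frame = cv2.imread("short_frame.png")  # hypothetical input frame
    cv2.imwrite("short_frame_enhanced.png", enhance_frame(frame))
```

The distinction Toh is drawing is visible in code like this: filters of this sort only re-weight pixels that are already in the frame, whereas generative models invent detail that was never captured.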

Read more of this story at Slashdot.

Big Tech’s Mixed Response to U.S. Treasury Sanctions

3 July 2025 at 12:06

In May 2025, the U.S. government sanctioned a Chinese national for operating a cloud provider linked to the majority of virtual currency investment scam websites reported to the FBI. But a new report finds the accused continues to operate a slew of established accounts at American tech companies — including Facebook, GitHub, PayPal and Twitter/X.

On May 29, the U.S. Department of the Treasury announced economic sanctions against Funnull Technology Inc., a Philippines-based company alleged to provide infrastructure for hundreds of thousands of websites involved in virtual currency investment scams known as “pig butchering.” In January 2025, KrebsOnSecurity detailed how Funnull was designed as a content delivery network that catered to foreign cybercriminals seeking to route their traffic through U.S.-based cloud providers.

The Treasury also sanctioned Funnull’s alleged operator, a 40-year-old Chinese national named Liu “Steve” Lizhi. The government says Funnull directly facilitated financial schemes resulting in more than $200 million in financial losses by Americans, and that the company’s operations were linked to the majority of pig butchering scams reported to the FBI.

It is generally illegal for U.S. companies or individuals to transact with people sanctioned by the Treasury. However, as Mr. Lizhi’s case makes clear, just because someone is sanctioned doesn’t necessarily mean big tech companies are going to suspend their online accounts.

The government says Lizhi was born November 13, 1984, and used the nicknames “XXL4” and “Nice Lizhi.” Nevertheless, Steve Liu’s 17-year-old account on LinkedIn (in the name “Liulizhi”) had hundreds of followers (Lizhi’s LinkedIn profile helpfully confirms his birthday) until quite recently: The account was deleted this morning, just hours after KrebsOnSecurity sought comment from LinkedIn.

Mr. Lizhi’s LinkedIn account was suspended sometime in the last 24 hours, after KrebsOnSecurity sought comment from LinkedIn.

In an emailed response, a LinkedIn spokesperson said the company’s “Prohibited countries policy” states that LinkedIn “does not sell, license, support or otherwise make available its Premium accounts or other paid products and services to individuals and companies sanctioned by the U.S. government.” LinkedIn declined to say whether the profile in question was a premium or free account.

Mr. Lizhi also maintains a working PayPal account under the name Liu Lizhi and username “@nicelizhi,” another nickname listed in the Treasury sanctions. A 15-year-old Twitter/X account named “Lizhi” that links to Mr. Lizhi’s personal domain remains active, although it has few followers and hasn’t posted in years.

These accounts and many others were flagged by the security firm Silent Push, which has been tracking Funnull’s operations for the past year and calling out U.S. cloud providers like Amazon and Microsoft for failing to more quickly sever ties with the company.

Liu Lizhi’s PayPal account.

In a report released today, Silent Push found Lizhi still operates numerous Facebook accounts and groups, including a private Facebook account under the name Liu Lizhi. Another Facebook account clearly connected to Lizhi is a tourism page for Ganzhou, China called “EnjoyGanzhou” that was named in the Treasury Department sanctions.

“This guy is the technical administrator for the infrastructure that is hosting a majority of scams targeting people in the United States, and hundreds of millions have been lost based on the websites he’s been hosting,” said Zach Edwards, senior threat researcher at Silent Push. “It’s crazy that the vast majority of big tech companies haven’t done anything to cut ties with this guy.”

The FBI says it received nearly 150,000 complaints last year involving digital assets and $9.3 billion in losses — a 66 percent increase from the previous year. Investment scams were the top crypto-related crimes reported, with $5.8 billion in losses.
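For scale, the 66 percent increase implies previous-year losses of roughly $5.6 billion, as a quick back-of-the-envelope calculation shows:

```python
# Back-of-the-envelope check on the FBI figures cited above.
losses_last_year = 9.3e9      # reported digital-asset losses
increase = 0.66               # "a 66 percent increase from the previous year"
implied_prior_year = losses_last_year / (1 + increase)
print(f"Implied prior-year losses: ${implied_prior_year / 1e9:.1f} billion")  # ~$5.6 billion
```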

In a statement, a Meta spokesperson said the company continuously takes steps to meet its legal obligations, but that sanctions laws are complex and varied. The spokesperson explained that sanctions are often targeted in nature and don't always prohibit people from having a presence on Meta's platforms. Nevertheless, Meta confirmed it had removed the account, unpublished the Pages, and removed the Groups and events associated with the user for violating its policies.

Attempts to reach Mr. Lizhi via his primary email addresses at Hotmail and Gmail bounced as undeliverable. Likewise, his 14-year-old YouTube channel appears to have been taken down recently.

However, anyone interested in viewing or using Mr. Lizhi’s 146 computer code repositories will have no problem finding GitHub accounts for him, including one registered under the NiceLizhi and XXL4 nicknames mentioned in the Treasury sanctions.

One of multiple GitHub profiles used by Liu “Steve” Lizhi, who uses the nickname XXL4 (a moniker listed in the Treasury sanctions for Mr. Lizhi).

Mr. Lizhi also operates a GitHub page for an open source e-commerce platform called NexaMerchant, which advertises itself as a payment gateway working with numerous American financial institutions. Interestingly, this profile’s “followers” page shows several other accounts that appear to be Mr. Lizhi’s. All of the account’s followers are tagged as “suspended,” even though that suspended message does not display when one visits those individual profiles.

In response to questions, GitHub said it has a process in place to identify when users and customers are Specially Designated Nationals or other denied or blocked parties, but that it locks those accounts instead of removing them. According to its policy, GitHub takes care that users and customers aren’t impacted beyond what is required by law.

All of the follower accounts for the XXL4 GitHub account appear to be Mr. Lizhi’s, and have been suspended by GitHub, but their code is still accessible.

“This includes keeping public repositories, including those for open source projects, available and accessible to support personal communications involving developers in sanctioned regions,” the policy states. “This also means GitHub will advocate for developers in sanctioned regions to enjoy greater access to the platform and full access to the global open source community.”

Edwards said it’s great that GitHub has a process for handling sanctioned accounts, but that the process doesn’t seem to communicate risk in a transparent way, noting that the only indicator on the locked accounts is the message, “This repository has been archived by the owner. It is now read-only.”

“It’s an odd message that doesn’t communicate, ‘This is a sanctioned entity, don’t fork this code or use it in a production environment’,” Edwards said.

Mark Rasch is a former federal cybercrime prosecutor who now serves as counsel for the New York City-based security consulting firm Unit 221B. Rasch said when Treasury’s Office of Foreign Assets Control (OFAC) sanctions a person or entity, it then becomes illegal for businesses or organizations to transact with the sanctioned party.

Rasch said financial institutions have very mature systems for severing accounts tied to people who become subject to OFAC sanctions, but that tech companies may be far less proactive — particularly with free accounts.

“Banks have established ways of checking [U.S. government sanctions lists] for sanctioned entities, but tech companies don’t necessarily do a good job with that, especially for services that you can just click and sign up for,” Rasch said. “It’s potentially a risk and liability for the tech companies involved, but only to the extent OFAC is willing to enforce it.”
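The "established ways of checking" Rasch describes generally start with OFAC's published Specially Designated Nationals (SDN) list, which is distributed as downloadable data files. The sketch below shows the crudest possible version of that screening step; the CSV URL, column layout, and matching logic reflect the commonly used public files but should be treated as assumptions here, and real compliance systems add fuzzy matching, aliases, and the other OFAC list files.

```python
# Minimal sketch of sanctions screening against OFAC's SDN list. Real compliance
# systems use fuzzy/phonetic matching, alias files, and audit trails; this does a
# naive normalized match purely for illustration.
import csv
import io
import urllib.request

SDN_CSV_URL = "https://www.treasury.gov/ofac/downloads/sdn.csv"  # assumed public SDN CSV

def normalize(name: str) -> str:
    # Lowercase and drop punctuation so "LIU, Lizhi" and "Liu Lizhi" compare equal.
    return " ".join("".join(c for c in name.lower() if c.isalnum() or c.isspace()).split())

def load_sdn_names(url: str = SDN_CSV_URL) -> set[str]:
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    # In the SDN CSV, the second column holds the sanctioned party's name.
    return {normalize(row[1]) for row in csv.reader(io.StringIO(text)) if len(row) > 1}

def is_listed(candidate: str, sdn_names: set[str]) -> bool:
    return normalize(candidate) in sdn_names

if __name__ == "__main__":
    names = load_sdn_names()
    for candidate in ("Funnull Technology Inc.", "Liu Lizhi"):
        print(candidate, "->", "possible SDN match" if is_listed(candidate, names) else "no match")
```

Even a naive pass like this would flag the names and aliases in the May 29 designation; the gap Rasch points to is that many click-to-sign-up services never run such a check at all.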

Liu Lizhi operates numerous Facebook accounts and groups, including this one for an entity specified in the OFAC sanctions: The “Enjoy Ganzhou” tourism page for Ganzhou, China. Image: Silent Push.

In July 2024, Funnull purchased the domain polyfill[.]io, the longtime home of a legitimate open source project that allowed websites to ensure that devices using legacy browsers could still render content in newer formats. After the Polyfill domain changed hands, at least 384,000 websites were caught in a supply-chain attack that redirected visitors to malicious sites. According to the Treasury, Funnull used the code to redirect people to scam websites and online gambling sites, some of which were linked to Chinese criminal money laundering operations.
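The exposure came from sites loading JavaScript directly from a third-party domain that quietly changed owners. A basic audit, sketched below with a hypothetical directory layout and host list, is simply to scan your own pages for script tags that pull code from polyfill.io or any other host you no longer trust:

```python
# Illustrative audit: find <script> tags that load code from a distrusted third-party
# host, the exposure behind the polyfill.io supply-chain attack described above.
# The directory path and host list are hypothetical.
import pathlib
import re

SUSPECT_HOSTS = ("polyfill.io",)
SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)

def find_suspect_scripts(html_dir: str = "site/") -> None:
    for path in pathlib.Path(html_dir).rglob("*.html"):
        for src in SCRIPT_SRC.findall(path.read_text(errors="replace")):
            if any(host in src for host in SUSPECT_HOSTS):
                print(f"{path}: loads script from suspect host -> {src}")

if __name__ == "__main__":
    find_suspect_scripts()
```

The more durable fixes are self-hosting such scripts or pinning them with Subresource Integrity hashes so the browser rejects a script whose contents have changed.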

The U.S. government says Funnull provides domain names for websites on its purchased IP addresses, using domain generation algorithms (DGAs) — programs that generate large numbers of similar but unique names for websites — and that it sells web design templates to cybercriminals.
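Funnull's actual algorithm has not been published, so the snippet below is only a toy illustration of what a DGA is: a shared seed deterministically churning out large numbers of similar, disposable domain names.

```python
# Toy DGA sketch: deterministic, seed-driven generation of many similar but unique
# domain names. This is a generic illustration, NOT Funnull's actual algorithm.
import hashlib

def generate_domains(seed: str, count: int = 10, tld: str = ".top") -> list[str]:
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + tld)  # short, look-alike labels, all unique
    return domains

if __name__ == "__main__":
    for domain in generate_domains("example-campaign-2025"):
        print(domain)
```

Because the output is deterministic, anyone who recovers the algorithm and seed can generate the same list, which is how defenders predict, block, or pre-register DGA domains before they go live.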

“These services not only make it easier for cybercriminals to impersonate trusted brands when creating scam websites, but also allow them to quickly change to different domain names and IP addresses when legitimate providers attempt to take the websites down,” reads a Treasury statement.

Meanwhile, Funnull appears to be morphing nearly all aspects of its business in the wake of the sanctions, Edwards said.

“Whereas before they might have used 60 DGA domains to hide and bounce their traffic, we’re seeing far more now,” he said. “They’re trying to make their infrastructure harder to track and more complicated, so for now they’re not going away but more just changing what they’re doing. And a lot more organizations should be holding their feet to the fire.”

Update, 2:48 PM ET: Added response from Meta, which confirmed it has closed the accounts and groups connected to Mr. Lizhi.

Update, July 7, 6:56 p.m. ET: In a written statement, PayPal said it continually works to combat and prevent the illicit use of its services.

“We devote significant resources globally to financial crime compliance, and we proactively refer cases to and assist law enforcement officials around the world in their efforts to identify, investigate and stop illegal activity,” the statement reads.
