
Digg Tries Again, This Time As an AI News Aggregator

Digg is relaunching again, this time as an AI-focused news aggregator rather than the Reddit-style community site it recently abandoned. TechCrunch reports: On Friday evening, the founder previewed a link to the newly redesigned Digg, which now looks less like a Reddit clone and more like the news aggregator it once was. This time around, the site is focused on ranking news -- specifically, AI news to start. In an email to beta testers, the company said the site's goal is to "track the most influential voices in a space" and to surface the news that's actually worth "paying attention to." AI is the area it's testing this idea with, but if successful, Digg will expand to include other topics. The email warned that the site was still raw and "buggy," and was designed more to give users a first look than to serve as its public debut. On the current homepage, Digg showcases four main stories at the top: the most viewed story, a story seeing rising discussion, the fastest-climbing story, and one "In case you missed it" headline. Below that is a ranked list of top stories for the day, complete with engagement metrics like views, comments, likes, and saves. But the twist is that these metrics aren't the ones generated on Digg itself. Instead, Digg is ingesting content from X in real-time to determine what's being discussed, while also performing sentiment analysis, clustering, and signal detection to determine what matters most. [...] The site also ranks the top 1,000 people involved in AI, as well as the top companies and the top politicians focused on AI issues.
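Digg hasn't published how its ranking works, but the "most viewed" vs. "fastest-climbing" distinction maps onto a familiar pattern: raw engagement counts versus engagement velocity. Here's a minimal, purely illustrative sketch of that idea; all weights, field names, and sample numbers are invented for the example and aren't from Digg:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    views: int
    comments: int
    likes: int
    saves: int
    prev_views: int  # views in the previous time window, for "rising" detection

def engagement_score(s: Story) -> float:
    # Weight deeper engagement (saves, comments) above passive views.
    return s.views * 1.0 + s.comments * 5.0 + s.likes * 2.0 + s.saves * 8.0

def rising(s: Story) -> float:
    # Velocity: growth relative to the previous window, not absolute size.
    return (s.views - s.prev_views) / max(s.prev_views, 1)

stories = [
    Story("A", views=1000, comments=10, likes=50, saves=5, prev_views=900),
    Story("B", views=300, comments=40, likes=80, saves=30, prev_views=50),
]

top = max(stories, key=engagement_score)      # biggest overall -> "A"
fastest = max(stories, key=rising)            # fastest-growing -> "B"
```

The point of separating the two signals is that a huge story and a breakout story are rarely the same item: "A" wins on total engagement while "B" wins on velocity.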

Read more of this story at Slashdot.

LinkedIn Profile Visitor Lists Belong to the People, Says Noyb

A LinkedIn user in the EU is challenging Microsoft's refusal to provide a full list of profile visitors under GDPR Article 15, arguing that the data should be available for free because LinkedIn processes it and sells a more complete version to Premium users. Privacy group Noyb says the case could set a broader precedent over whether companies can monetize user-related data while denying access to the same data through GDPR requests. "Selling data to its own users is a popular practice among companies," Noyb data protection lawyer Martin Baumann said of the case. "In reality, however, people have the right to receive their own data free of charge." The Register reports: Take a look at the language of Article 15, and it's pretty clear: data subjects (i.e., users) have the right to a copy of any and all data concerning them that's been processed by the provider. A full list of profile visitors seemingly should fall under Article 15 data -- even if it's normally reserved for paying users and presented to them in a nicer way, it should still be accessible to free users who actually request it. [...] Noyb acknowledges there's a clear bit of legal fuzz stuck in this corner of the GDPR when it comes to premium service offerings. "If any business processes a person's personal data, this information is generally covered by their right of access under the GDPR," Baumann told The Register. "It does not matter that the business would prefer to sell the data to the data subject or that it would be harmful for their business model if they would." There's only one exception in Article 15 that would give LinkedIn an out, Baumann told us, and that's the last paragraph, which says a person's right to their data can't adversely affect the rights and freedoms of others. Were LinkedIn to argue that it had to protect the identities of people who visited a data subject's profile, they could have an excuse. But not a good one, in Baumann's opinion. 
"Since LinkedIn does provide information about profile visits to paying Premium members, it cannot consider that disclosing the data would adversely affect the rights of the visitors whose data is disclosed," the Noyb lawyer explained. "Otherwise, providing this information to Premium users would be unlawful too." What seems to be the sticking point here is where right of access begins and a company's right to make money off data they hold (data that was, ahem, supplied by users) ends. Baumann said he hopes this case can clear the legal air. "We expect a clarification concerning the fact that personal data that can be accessed when a user pays for it is also covered by their right of access," he explained. [...] Baumann said there are numerous other cases where similar legal clarification would be appreciated, citing the example of a bank that is unwilling to provide access to account statements in response to a GDPR request, but is happy to hand over similar data for a fee. "A precedent would be welcomed," Baumann said. A LinkedIn spokesperson told The Register: "Not only is it incorrect that only Premium members can see who has viewed their profile, but we also satisfy GDPR Article 15 by disclosing the information at issue via our Privacy Policy."

Read more of this story at Slashdot.

It's Goodbye Time for Jeeves and Ask.com - Relics of Yesterday's Internet

A 1999 press release bragged "Jeeves" answered 92.3 million questions in just three months. "In the digital wilds of Y2K, we came to him with our most probing questions," remembers the New York Times -- whether it was Britney Spears or tamagotchis: We asked, and he answered: Jeeves, the digital butler of information, the online valet who led us into the depths of cyberspace. Now, like so many other relics of yesterday's internet, Jeeves -- and his home, Ask.com -- are no more. After almost 30 years, the question-and-answer service and former search engine shuttered on Friday. "To you -- the millions of users who turned to us for answers in a rapidly changing world -- thank you for your endless curiosity, your loyalty, and your trust," the company said in a notice posted on its now-defunct website... Created in Berkeley, Calif., in the days of the dot-com gold rush, Ask Jeeves first appeared on computer screens in 1996.... Its mascot, Jeeves, was modeled on the clever English butler character from the famed P.G. Wodehouse book series. Its search function was simple -- type in a question, get an answer. But the quality of its responses was uneven, and the website was quickly eclipsed by Google and Yahoo as the world's go-to search engines. The site was bought by InterActive Corp. for more than $1 billion in 2005 and was given an injection of cash to help it compete as a search engine. It rebranded as Ask.com, and as part of the reimagining, the site also ditched the character of Jeeves in 2006. Scrappy but inventive, the site was one of the first to introduce hyperlocal map overlays to its searches and incorporate thumbnails of webpages. "They are doing a lot of clever and interesting things," a Google executive noted of Ask.com at the time. Still, Ask.com struggled to compete and returned in 2010 to its bread and butter: question-and-answer style prompts.
Even then, it faltered against newer, crowdsourced iterations like Quora and Google's unyielding march to the internet fore -- the platform now dominates search traffic, and the world's general experience of the internet. A statement at Ask.com ends by thanking its millions of users and saying "Jeeves' spirit endures," notes this article from Engadget: As sad as it is to see a relic of the early Internet days fade into obscurity, we still have Ask Jeeves to thank for why some users still punch in full questions when querying Google. On top of that, Jeeves was built to provide detailed answers in natural language, which arguably could have acted as a precursor to today's AI chatbots like ChatGPT. "Now, Ask.com joins the Internet graveyard that includes competitors like AltaVista, which shut down in 2013," the article points out. "With Ask.com gone, alongside AIM and AOL dial-up services also sunsetting, we're truly coming to an end of a specific era of the Internet." And the New York Times argues the memory of Jeeves now rests somewhere between Limewire and Beanie Babies... Slashdot reader BrianFagioli calls it "a quiet reminder of how quickly the web moves, and how even widely recognized names can drift into obscurity once the underlying technology leaves them behind."

Read more of this story at Slashdot.

Costumed Crowd 'Speedruns' Scientology Building For Social Media Trend

Last Saturday someone dressed as Jesus "was among the dozens of people in costumes and masks seen on a video forcing open the door of a Scientology building on Hollywood Boulevard," reports the Los Angeles Times, "after a tug-of-war with a security guard." The footage posted on TikTok and Instagram shows the group sprinting up and down stairs and clashing with black-shirted security guards, giggling and gasping to catch their breath while church members scream at them to leave. On their way out -- as security guards approach armed with fire extinguishers -- one of the sprinters stops and dances to celebrate their successful escape, a move reminiscent of a taunt from the video game Fortnite. For weeks, groups of people have barged into two of the church's Hollywood properties, racing through hallways and tussling with security guards, trying to see how far they can get before they are forced to leave by church staff... Church officials say the incidents are not a game and have accused the speed runners of "hate crimes." After dozens on Saturday stormed the Ivar Avenue building that houses an exhibit dedicated to the church's founder, science fiction author L. Ron Hubbard, the external door handles were removed from all three of Scientology's properties on Hollywood Boulevard by Sunday morning. Guards could be seen blocking the doorway to one building on Monday afternoon... No arrests have been made. A report from the Associated Press cites a joke left on one of the videos: that if runners reach the top of the building, they'll find Tom Cruise. One commenter on a recent TikTok video of a speedrun asked why people are doing this, and another user simply replied, "because it's fun." The 18-year-old who started the trend told the Hollywood Reporter his original video has been viewed over 100 million times. "From there on out, I pretty much knew that Scientology was like a free gateway to a lot of views."
Vulture notes that "there's even a Roblox re-creation of the trend, made using 'maps' drawn from actual videos."

Read more of this story at Slashdot.

Norway Set to Become Latest Country to Ban Social Media for Under 16s

Norway plans to ban social media access for children under 16 (source paywalled; alternative source), "joining a growing number of countries responding to concerns about the potential harm kids face online," reports Bloomberg. From the report: The bill comes after "overwhelming" demand from the public, the government said Friday. It plans to bring the legislation to parliament before the end of the year. The limit will apply until January 1 of the year a child turns 16, with technology companies responsible for age verification, the government said. "We want a childhood where children get to be children," Prime Minister Jonas Gahr Støre said in the statement. "Play, friendships, and everyday life must not be taken over by algorithms and screens." "Children cannot be left with the responsibility for staying away from platforms they are not allowed to use," Karianne Tung, Norway's minister of digitalization, said in the statement. "That responsibility rests with the companies providing these services." Recent Slashdot coverage of countries instituting or proposing social media bans has included Australia, France, Austria, Indonesia, and Denmark.

Read more of this story at Slashdot.

Palantir Posts Bond Villain Manifesto On X

DeanonymizedCoward writes: Engadget reports that Palantir has posted to X a summary of CEO Alex Karp and Nicholas W. Zamiska's 2025 book, The Technological Republic, which reads like a utopian idealist doodled on a Bond villain's whiteboard. While the post makes some decent points, it also highlights the Big-AI attitude that the AI surveillance state is in fact a good thing, and strongly implies that the Good Guys need to do war crimes before the Bad Guys get around to it. "The ability of free and democratic societies to prevail requires something more than moral appeal," one of the 22 points states. "It requires hard power, and hard power in this century will be built on software." The book is billed as "a passionate call for the West to wake up to our new reality," and other excerpts in the social media post include assertions such as: "Free email is not enough. The decadence of a culture or civilization, and indeed its ruling class, will be forgiven only if that culture is capable of delivering economic growth and security for the public"; "National service should be a universal duty"; "The postwar neutering of Germany and Japan must be undone"; and "Some cultures have produced vital advances; others remain dysfunctional and regressive." The statement criticizes the West's resistance to "defining national cultures in the name of inclusivity," as well as the treatment of billionaires and the "ruthless exposure of the private lives of public figures."

Read more of this story at Slashdot.

Motorola Sues Social Media Platforms and Creators in India

"Motorola has filed a lawsuit in India against social media platforms and content creators," reports TechCrunch, "over posts it alleges are defamatory..." The lawsuit, filed in a Bengaluru court and obtained by TechCrunch, names platforms such as X, YouTube, and Instagram along with dozens of content creators, and seeks takedown of the content as well as broader restraint on what it describes as false or defamatory material related to the company's devices. In its over 60-page filing, Motorola has sought a permanent injunction restraining the defendants from publishing or sharing what it describes as false or defamatory content about its products, including reviews, videos, comments, and boycott campaigns. The complaint cites hundreds of posts across platforms, including videos alleging device issues and phones catching fire. But it is also targeting unfavorable product reviews and user commentary that the company alleges are false or defamatory. In a statement after publication, a Motorola spokesperson said it had initiated legal action "in the interest of public safety" against what it described as demonstrably false claims that its devices had exploded or caught fire. One online creator told TechCrunch "they expect more such legal action in the future, as evolving rules around online content increase liability for creators and platforms -- a trend reflected in recently proposed changes to India's IT rules aimed at tightening oversight of online content." A Motorola spokesperson "said the company did not seek to suppress legitimate reviews or criticism and was reviewing the scope of the proceedings, adding that it apologized to creators affected inadvertently."

Read more of this story at Slashdot.

Social Media Platforms Need To Stop Never-Ending Scrolling, UK's Starmer Says

UK Prime Minister Keir Starmer said social media platforms should remove addictive infinite-scroll features for young users as Britain considers new child-safety measures. "We're consulting on whether there should be a ban for under 16s," Starmer told BBC Radio. "But I think equally important, the addictive scrolling mechanisms are really problematic to my mind. They need to go." Reuters reports: Britain, like other countries, is considering restricting access to social media for children and it is testing bans, curfews and app time limits to see how they impact sleep, family life and schoolwork. Social media companies had designed algorithms that were intended to encourage addictive behavior, and parents were asking the government to intervene, Starmer said. [...] More than 45,000 people had already responded to its consultation on children's online safety, the UK government said, adding that there was still time to contribute before a deadline of May 26. "We want to hear from mums and dads who are worried about the amount of time their children spend online and what they are viewing," Technology Secretary Liz Kendall said on Monday. "We want to hear from teenagers who know better than anyone what it is like to grow up in the age of social media. And we want to hear from families about their views on curfews, AI chatbots and addictive features."

Read more of this story at Slashdot.

Two-Week Social Media 'Detox' Erases a Decade of Age-Related Decline, Study Finds

Critics say social media is engineered to be as addictive as tobacco or gambling, writes the Washington Post -- while adding that "the science has been moving in parallel with the court's recognition." A growing body of research links heavy social media use not only to declines in mental health but to measurable cognitive effects -- on attention, memory and focus -- that in some studies resemble accelerated aging. Science also suggests we have more control than we realize when it comes to reversing this damage, and the solution is surprisingly simple: Take a break... "Digital detoxes" can sound like a fad. But in one of the largest studies to date, published in PNAS Nexus and involving 467 participants with an average age of 32, even a short time away produced striking results -- effectively erasing a decade of age-related cognitive decline. For 14 days, participants used a commercially available app, Freedom, to block internet access on their phones. They were still allowed calls and text messages, essentially turning a smartphone into a dumb phone. Their time online decreased from 314 minutes to 161 minutes, and by the end of the period the participants had improvements in sustained attention, mental health, and self-reported well-being. The improvement in sustained attention was about the same magnitude as 10 years of age-related decline, the researchers noted, and the effect of the intervention on depression symptoms was larger than antidepressants and similar to that of cognitive behavioral therapy. But two things were even more mind-blowing... Even those people who cheated and broke the rules after a few days seemed to have positive effects from the break; and in follow-up reports after the two weeks, many people reported the positive effects lingered. "So you don't have to necessarily restrict yourself forever. Even taking a partial digital detox, even for a few days, seems to work," Kushlev said. 
The article also notes a November Harvard study of nearly 400 people, published in JAMA Network Open, which "found that even a short break can make a measurable difference: After just one week of reduced smartphone use, participants reported drops in anxiety (16.1 percent), depression (24.8 percent) and insomnia (14.5 percent)..." "Other experiments point in the same direction -- whether decreasing social media use by an hour a day for one week or stepping away from just Facebook and Instagram."

Read more of this story at Slashdot.

Are Employers Using Your Data To Figure Out the Lowest Salary You'll Accept?

MarketWatch looks at "surveillance wages," pay rates "based not on an employee's performance or seniority, but on formulas that use their personal data, often collected without employees' knowledge." According to Nina DiSalvo, policy director at labor advocacy group Towards Justice, some systems use signals associated with financial vulnerability β€” including data on whether a prospective employee has taken out a payday loan or has a high credit-card balance β€” to infer the lowest pay a candidate might accept. Companies can also scrape candidates' public personal social-media pages, she said... A first-of-its-kind audit of 500 labor-management artificial-intelligence companies by Veena Dubal, a law professor at University of California, Irvine, and Wilneida NegrΓ³n, a tech strategist, found that employers in the healthcare, customer service, logistics and retail industries are customers of vendors whose tools are designed to enable this practice. Published by the Washington Center for Equitable Growth, a progressive economic think tank, the August 2025 report... does not claim that all employers using these systems engage in algorithmic wage surveillance. Instead, it warns that the growing use of algorithmic tools to analyze workers' personal data can enable pay practices that prioritize cost-cutting over transparency or fairness... Surveillance wages don't stop at the hiring stage β€” they follow workers onto the job, too. The vendors that provide such services also offer tools that are built to set bonus or incentive compensation, according to the report. These tools track their productivity, customer interactions and real-time behavior β€” including, in some cases, audio and video surveillance on the job. Nearly 70% of companies with more than 500 employees were already using employee-monitoring systems in 2022, such as software that monitors computer activity, according to a survey from the International Data Corporation. 
"The data that they have about you may allow an algorithmic decision system to make assumptions about how much, how big of an incentive, they need to give to a particular worker to generate the behavioral response they seek," DiSalvo said. The article notes that Colorado introduced the "Prohibit Surveillance Data to Set Prices and Wages Act" to ban companies from algorithmically setting pay rates based on payday-loan history, location data, or Google search behavior. Thanks to long-time Slashdot reader sinij for sharing the article.

Read more of this story at Slashdot.

Australia Readies Social Media Court Action Citing Teen Ban Breaches

Australia is preparing possible court action against major social media platforms that are failing to enforce the country's social media ban on under-16s. "Three months after the ban came into effect, the eSafety Commissioner said it was probing Meta's Instagram and Facebook, Google's YouTube, Snapchat and TikTok for possible breaches of the law," reports Reuters. From the report: Communications Minister Anika Wells said the government was gathering evidence "so that the eSafety Commissioner can go to the Federal Court and win." "We have spent the summer building that evidence base of all the stories that no doubt you have all heard ... about how kids are getting around that," Wells told reporters in Canberra. The legal threat is a striking change of tone from a government which had hailed tech giants' shows of cooperation when the ban went live in December. Under the Australian law, platforms must show they are taking reasonable steps to keep out underage users or face fines of up to $34 million per breach, something eSafety would need to pursue in a civil court. The regulator previously said it would only take enforcement action in cases of systemic noncompliance. But in its first comprehensive compliance report since the ban took effect, eSafety said measures taken by the platforms were substandard and it would make a decision about next steps by mid-year. "We are now moving into an enforcement stance," said commissioner Julie Inman Grant in a statement. The regulator reported major compliance gaps, including platforms prompting children who had previously declared ages under 16 to do fresh age checks, allowing repeated attempts at age-assurance tests until a child got a result over 16 and poor pathways for people to report underage accounts. 
Some platforms did not use age-inference, which estimates age based on someone's online activity, and some only used age-assurance measures like photo-based checks after a user tried to change their age, rather than at sign-up. That made it "likely many Australian children aged under 16 have been able to create accounts on age-restricted social media platforms by simply declaring they are 16 or older", the regulator said. Nearly one-third of parents reported their under-16 child had at least one social media account after the ban took effect, of which two-thirds said the platform had not asked the child's age, it added.
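The retry loophole the regulator describes is a general statistical problem: any noisy age estimator that a user can re-run at will eventually produces a passing result. A small, purely illustrative simulation (the estimator, its error range, and the ages are invented for the example, not taken from any real age-assurance vendor) shows why unlimited attempts defeat the check:

```python
import random

def photo_age_check(true_age: int) -> int:
    # Hypothetical noisy estimator: photo-based age checks have error
    # bars, so a 14-year-old is sometimes scored as 16 or older.
    return true_age + random.randint(-3, 3)

def naive_flow(true_age: int, attempts: int) -> bool:
    # The flaw eSafety described: the user may simply retry until one
    # attempt returns a passing (16+) result.
    return any(photo_age_check(true_age) >= 16 for _ in range(attempts))

random.seed(0)
# Estimate pass rates for a 14-year-old over 1,000 simulated users.
one_try = sum(naive_flow(14, 1) for _ in range(1000)) / 1000
many_tries = sum(naive_flow(14, 10) for _ in range(1000)) / 1000
```

With a single attempt the under-age user passes only occasionally; with ten retries the pass rate climbs toward certainty, which is why assurance schemes need to record and limit failed attempts rather than treat each one as fresh.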

Read more of this story at Slashdot.

Will Social Media Change After YouTube and Meta's Court Defeat?

Yes, this week YouTube and Meta were found negligent in a landmark case about social media addiction. But "it's still far from certain what this defeat will change," argues The Verge's senior tech and policy editor, "and what the collateral damage could be." If these decisions survive appeal -- which isn't certain -- the direct outcome would be multimillion-dollar penalties. Depending on the outcome of several more "bellwether" cases in Los Angeles, a much larger group settlement could be reached down the road... For many activists, the overall goal is to make clear that lawsuits will keep piling up if companies don't change their business practices... The best-case outcome of all this has been laid out by people like Julie Angwin, who wrote in The New York Times that companies should be pushed to change "toxic" features like infinite scrolling, beauty filters that encourage body dysmorphia, and algorithms that prioritize "shocking and crude" content. The worst-case scenario falls along the lines of a piece from Mike Masnick at Techdirt, who argued the rulings spell disaster for smaller social networks that could be sued for letting users post and see First Amendment-protected speech under a vague standard of harm. He noted that the New Mexico case hinged partly on arguing that Meta had harmed kids by providing end-to-end encryption in private messaging, creating an incentive to discontinue a feature that protects users' privacy -- and indeed, Meta discontinued end-to-end encryption on Instagram earlier this month. Blake Reid, a professor at Colorado Law, is more circumspect. "It's hard right now to forecast what's going to happen," Reid told The Verge in an interview. On Bluesky, he noted that companies will likely look for "cold, calculated" ways to avoid legal liability with the minimum possible disruption, not fundamentally rethink their business models. 
"There are obviously harms here and it's pretty important that the tort system clocked those harms" in the recent cases, he told The Verge. "It's just that what comes in the wake of them is less clear to me." The article also includes this prediction from legal blogger/Section 230 expert Eric Goldman: "There will be even stronger pushes to restrict or ban children from social media." Goldman argues this "hurts many subpopulations of minors, ranging from LGBTQ teens who will be isolated from communities that can help them navigate their identities to minors on the autism spectrum who can express themselves better online than they can in face-to-face conversations."

Read more of this story at Slashdot.

Bluesky's Newest Product: an AI Tool That Gives You Custom Feeds

"What happens when you can describe the social experience you want and have it built for you...?" asks Bluesky. "We've just started experimenting, but we're sharing it now because we want you to build alongside us." Called "Attie" -- because it's built with Bluesky's decentralized publishing framework, AT Protocol (which is open source) -- the new assistant turns natural language prompts into social feeds, without users having to know how to code. (It's part of Bluesky's mission to "develop and drive large-scale adoption of technologies for open and decentralized public conversation.") Engadget reports: On the Attie website, examples include prompts like, "Show me electronic music and experimental sound from people in my network" or "Builders working on agent infrastructure and open protocol design." "It feels more like having a conversation than configuring software," [writes Bluesky's former CEO/current chief innovation officer, Jay Graber, in a blog post]. "You describe the sort of posts you want to see, and the coding agent builds the feed you described." Graber added that Attie is a separate app from Bluesky and users don't have to use the new AI assistant if they don't want to. However, since Attie and Bluesky were built on the same framework, it could mean there will be some cross-app implementation between the two or any other app built on the AT Protocol. "Attie is open for beta signups today, and we'll be sharing what we learn along the way," Graber writes in the blog post. "To learn more about Attie, visit: Attie.AI. Come help us find out what this can be." The blog post warns that "Right now, AI is undermining human agency at the same time it's enhancing it," since "The proliferation of low-quality AI-generated content is making public social networks noisier and less trustworthy..." And in a world where "signal is getting harder to find... The major platforms aren't trying to fix this problem." 
They're using AI to increase the time users spend on-platform, to harvest training data, and to shape what users see and believe through systems they can't inspect and didn't choose. We think AI should serve people, not platforms... An open protocol puts this power directly in users' hands. You can use it to build your own feeds, create software that works the way you want it to, and find signal in the noise. We built the AT Protocol so anyone could build any app they imagine on top of it, but until recently "anyone" really meant "anyone who can code." Agentic coding tools change that. For the first time, an open protocol can be genuinely open to everyone... The Atmosphere [Bluesky's interoperable ecosystem] is an open data layer with a clearly defined schema for applications, which makes it uniquely well-suited for coding agents to build on... Bluesky will continue to evolve as a social app millions of people rely on. Attie will be where we experiment with agentic social. AI is an accelerant on whatever it's applied to. I want it to accelerate decentralizing social and putting power back in users' hands. But I don't think the most interesting things built on AT Protocol will come from us. They're going to come from everyone who picks up these tools and starts building.
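The Attie announcement doesn't document its internals, but an AT Protocol custom feed is, conceptually, a function from a stream of posts to an ordered list of post URIs. A minimal illustrative sketch of that shape, with hypothetical post records and keywords standing in for whatever a generated feed would actually match on (real feed generators consume the network firehose rather than an in-memory list):

```python
# Hypothetical sample posts; only the shape (uri + content + a ranking
# signal) matters for the sketch.
posts = [
    {"uri": "at://alice/post/1", "text": "New modular synth patch", "likes": 12},
    {"uri": "at://bob/post/2", "text": "Lunch photos", "likes": 40},
    {"uri": "at://carol/post/3", "text": "Experimental sound collage", "likes": 7},
]

# Terms a prompt like "electronic music and experimental sound" might
# be compiled into by a coding agent.
KEYWORDS = {"synth", "sound", "electronic"}

def feed(posts: list[dict]) -> list[str]:
    # Keep posts mentioning any keyword, then rank by engagement;
    # a feed's output is just an ordered list of post URIs.
    matched = [p for p in posts if KEYWORDS & set(p["text"].lower().split())]
    return [p["uri"] for p in sorted(matched, key=lambda p: p["likes"], reverse=True)]
```

Because the feed is defined outside any one app, any AT Protocol client could subscribe to its output, which is the cross-app interoperability the post is pointing at.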

Read more of this story at Slashdot.

Austria Plans Social Media Ban For Under-14s

Austria plans to restrict under-14s from using social media platforms over concerns about addictive algorithms and harmful content. The government says draft legislation should be ready by the end of June, though details around enforcement and age verification have yet to be finalized. The BBC reports: Announcing the plans, Vice-Chancellor Andreas Babler of the Social Democrats said the government could not stand by and watch as social media made children "addicted and also often ill." He said it was the responsibility of politicians to protect children and argued that the issue should be treated no differently than alcohol or tobacco: "There must be clear rules in the digital world too." In future, said Babler, children under 14 would be protected from algorithms that were addictive. "Other information providers have clear rules to protect young people from harmful content." These, he said, should now be implemented in the digital space. Yesterday, juries in two separate cases found social media giants liable for harming young people's mental health. The verdicts are being hailed as social media's Big Tobacco moment. Further reading: California Bill Would Require Parent Bloggers To Delete Content of Minors On Social Media

Read more of this story at Slashdot.

California Bill Would Require Parent Bloggers To Delete Content of Minors On Social Media

A California bill would let adults demand the removal of social media posts about them that were created by paid family content creators when they were minors. Supporters say Senate Bill 1247 addresses privacy, dignity, and safety harms caused when parents monetize their children's lives online. The Los Angeles Times reports: The legislation would require the parent or other relative to delete or edit the content within 10 business days of receiving the notification. Petitioners could take civil action against those who fail to comply and statutory damages would be set at $3,000 for each day the content remained online. Sen. Steve Padilla (D-San Diego), who introduced the bill last month, said it would help protect the dignity and mental health of those who had their childhood shared on social media. The measure was referred to the Senate Privacy, Digital Technologies and Consumer Protection Committee and is slated for a hearing on April 6. "The evolution of these applications and technology is incredible," Padilla said. "But it's changing our social dynamic and it's creating situations that, while very productive for some folks, also need some guardrails." The bill would build upon previous legislation from Padilla that was signed into law two years ago and requires content creators that feature minors in at least 30% of their material to place some of their earnings into a trust the children can access when they turn 18.

Read more of this story at Slashdot.

Meta and YouTube Found Negligent in Landmark Social Media Addiction Case

A jury found Meta and YouTube negligent in a landmark social media addiction case, ruling that addictive design features such as infinite scroll and algorithmic recommendations harmed a young user and contributed to her mental health distress. The verdict awards $3 million in compensatory damages so far and could pave the way for more lawsuits seeking financial penalties and product changes across the social media industry. "Meta is responsible for 70 percent of that cost and YouTube for the remainder," notes The New York Times. "TikTok and Snap both settled with the plaintiff for undisclosed terms before the trial started." From the report: The bellwether case, which was brought by a now 20-year-old woman identified as K.G.M., had accused social media companies of creating products as addictive as cigarettes or digital casinos. K.G.M. sued Meta, which owns Instagram and Facebook, and Google's YouTube over features like infinite scroll and algorithmic recommendations that she claimed led to anxiety and depression. The jury of seven women and five men will deliberate further to decide what punitive damages the companies should pay for malice or fraud. The verdict in K.G.M.'s case -- one of thousands of lawsuits filed by teenagers, school districts and state attorneys general against Meta, YouTube, TikTok and Snap, which owns Snapchat -- was a major win for the plaintiffs. The finding validates a novel legal theory that social media sites or apps can cause personal injury. It is likely to factor into similar cases expected to go to trial this year, which could expose the internet giants to further financial damages and force changes to their products. The verdict also comes on the heels of a New Mexico jury ruling that found Meta liable for violating state law by failing to protect users of its apps from child predators.

Read more of this story at Slashdot.

Reddit Is Weighing Identity Verification Methods To Combat Its Bot Problem

An anonymous reader quotes a report from Engadget: There could be one more step required before creating an account and posting on Reddit in the future. According to Reddit's CEO, Steve Huffman, the social media platform is exploring different ways to verify that a user is human and not a bot. When asked by the TBPN podcast how to confirm that it's a human using Reddit, Huffman described several verification methods with varying degrees of heavy-handedness. "The most lightweight way is with something like Face ID or Touch ID," Huffman said during the interview. "They actually require a human presence, like a human has to touch, or do or look at something, so that actually just proves there's a person there or gets you pretty far." Besides these passkey methods that use biometric data, Huffman said there are other options, like relying on third-party services that are decentralized or don't require ID. On the other end of the spectrum, Huffman also mentioned more burdensome options, like ID-checking services. [...] "Part of our promise for our users is we don't know your name but we do want to know you're a person," Huffman said. "It'll be an evolution for us for a while, and probably every platform to find the right middle ground here." Reddit co-founder and former executive chair Alexis Ohanian said on X that Reddit requiring Face ID wasn't something he expected, but agreed that something had to be done about the fake content from bots, adding that, "I just don't know how to sell face-scanning to Redditors or even lurkers." We reached out to Reddit's communications team and will update the story when we hear back. Reddit isn't alone in facing this problem: the Digg beta shut down earlier this month after failing to fight an overwhelming influx of AI-driven bots and spam.

Read more of this story at Slashdot.

US Set To Receive $10 Billion Fee For Brokering TikTok Deal

The deal to take control of TikTok's U.S. business came with an unusual condition, according to people familiar with the matter. The investors, which include Oracle, Abu Dhabi investor MGX, and private-equity firm Silver Lake, "paid the Treasury Department about $2.5 billion when the deal closed in January," reports the Wall Street Journal, "and are set to make several additional payments until hitting the $10 billion total." The $10 billion payment would be nearly unprecedented for a government helping arrange a transaction, historians have said... Investment bankers advising on a typical deal receive fees of less than 1% of the transaction value, and the percentage generally gets smaller as the deal size increases. Bank of America is in line to make some $130 million for advising railroad operator Norfolk Southern on its $71.5 billion sale to Union Pacific, one of the largest fees on record for a single bank on a deal. Administration officials have said the fee is justified given Trump's role in saving TikTok in the U.S. and navigating negotiations with China to get the deal done while addressing the security concerns of lawmakers... The TikTok fee extracted from private-sector investors is the administration's latest transaction involving the nation's largest businesses. Trump took a nearly 10% stake in semiconductor company Intel and has agreed to take a chunk of chip sales to China from Nvidia in exchange for granting export licenses. The administration has also taken equity stakes in other companies and has a say in the operations of U.S. Steel following a "golden share" agreement with Japan's Nippon Steel in its takeover. Reuters notes that earlier this month, a lawsuit was filed by investors in two of TikTok's social media rivals, seeking to reverse the approval of the deal. Thanks to long-time Slashdot reader schwit1 for sharing the news.

Read more of this story at Slashdot.

Digg Relaunch Fails

sdinfoserv writes: After running a Reddit clone for a couple of months, the Digg beta has shut down again. The website now displays a splash memo from CEO Justin Mezzell blaming the latest "Hard Reset" on bots. "Building on the internet in 2026 is different," writes Mezzell. "We learned that the hard way. Today we're sharing difficult news: we've made the decision to significantly downsize the Digg team..." The decision came after the site struggled to gain traction and was hit by an overwhelming influx of AI-driven bots and spam. "When the Digg beta launched, we immediately noticed posts from SEO spammers noting that Digg still carried meaningful Google link authority," says Mezzell. "Within hours, we got a taste of what we'd only heard rumors about. The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn't appreciate the scale, sophistication, or speed at which they'd find us." "We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on." Despite the setback, Digg plans to rebuild with a smaller team, with founder Kevin Rose returning to work full-time on a new direction for the platform. "Starting the first week of April, Kevin will be putting his focus back on the company he built twenty+ years ago," writes Mezzell. "He'll continue as an advisor to True Ventures, but Digg will be his primary focus." Slashback: The Rise of Digg.com

Read more of this story at Slashdot.
