❌

Normal view

There are new articles available, click to refresh the page.
Yesterday β€” 18 October 2025Main stream

Salesforce Sued By Authors Over AI Software

By: msmash
17 October 2025 at 16:41
An anonymous reader shares a report: Cloud-computing firm Salesforce was hit with a proposed class action lawsuit by two authors who alleged the company used thousands of books without permission to train its AI software. Novelists Molly Tanzer and Jennifer Gilmore said in the complaint that Salesforce infringed copyrights by using their work to train its xGen AI models to process language.

Read more of this story at Slashdot.

Global Investors Position India as Anti-AI Play

By: msmash
17 October 2025 at 15:20
Foreign institutional investors have pulled nearly $30 billion from Indian equity markets over the past twelve months. A substantial portion of that capital moved to Korea and Taiwan. Foreign portfolio investor ownership in stocks listed on India's National Stock Exchange fell from 22.2% in September 2024 to 17.3% in May 2025. Taiwan absorbed $15 billion of net foreign inflows in the third quarter of 2025 alone. HSBC analysts say global investors increasingly view India through the lens of AI economics and are positioning the world's most populous nation as a global anti-AI play. India employs roughly 20 million people directly and indirectly in IT services. Services account for 55% of Indian gross domestic product. HSBC estimates digital AI agents cost approximately one-third as much as human agents for customer support and certain mid-office functions. Global tech giants will spend two trillion dollars on AI infrastructure between 2025 and 2030. India's AI Mission committed $1.25 billion over five years beginning March 2024.

Read more of this story at Slashdot.

Creator of Infamous AI Painting Tells Court He's a Real Artist

By: msmash
17 October 2025 at 14:40
Jason Allen has responded to critics who say he is not an artist by filing a new brief and announcing plans to sell oil-print reproductions of his AI-generated image. Allen won the Colorado State Fair Fine Arts Competition in 2022 after submitting Theatre D'opera Spatial, which Midjourney created. He said in a press release that being called an artist does not concern him but his work and expression do. Allen says he asked himself what could make the piece undeniably art and decided to create physical reproductions using technology. The reproductions employ a three-dimensional printing technique from a company called Arius that uses oil paints to simulate brushstrokes. Allen said the physical artifact is singular and real. His legal filing argues that he produced the artwork by providing hundreds of iterative text prompts to Midjourney and experimenting with over six hundred prompts before cropping and upscaling the final image. The U.S. Copyright Office has rejected his copyright applications for three years. The office maintains that Midjourney does not treat text prompts as direct instructions.

Read more of this story at Slashdot.

Before yesterdayMain stream

Open Source GZDoom Community Splinters After Creator Inserts AI-Generated Code

By: BeauHD
16 October 2025 at 17:30
An anonymous reader quotes a report from Ars Technica: If you've even idly checked in on the robust world of Doom fan development in recent years, you've probably encountered one of the hundreds of gameplay mods, WAD files, or entire commercial games based on GZDoom. The open source Doom port -- which can trace its lineage back to the original launch of ZDoom back in 1998 -- adds modern graphics rendering, quality-of-life additions, and incredibly deep modding features to the original Doom source code that John Carmack released in 1997. Now, though, the community behind GZDoom is publicly fracturing, with a large contingent of developers uniting behind a new fork called UZDoom. The move is in apparent protest of the leadership of GZDoom creator and maintainer Cristoph Oelckers (aka Graf Zahl), who recently admitted to inserting untested AI-generated code into the GZDoom codebase. "Due to some disagreements -- some recent; some tolerated for close to 2 decades -- with how collaboration should work, we've decided that the best course of action was to fork the project," developer Nash Muhandes wrote on the DoomWorld forums Wednesday. "I don't want to see the GZDoom legacy die, as do most all of us, hence why I think the best thing to do is to continue development through a fork, while introducing a different development model that highly favors transparent collaboration between multiple people." [...] Zahl defended the use of AI-generated snippets for "boilerplate code" that isn't key to underlying game features. "I surely have my reservations about using AI for project specific code," he wrote, "but this here is just superficial checks of system configuration settings that can be found on various websites -- just with 10x the effort required." But others in the community were adamant that there's no place for AI tools in the workflow of an open source project like this. "If using code slop generated from ChatGPT or any other GenAI/AI chatbots is the future of this project, I'm sorry to say but I'm out," GitHub user Cacodemon345 wrote, summarizing the feelings of many other developers. In a GitHub bug report posted Tuesday, user the-phinet laid out the disagreements over AI-generated code alongside other alleged issues with Zahl's top-down approach to pushing out GZDoom updates.

Read more of this story at Slashdot.

Logitech Open To Adding an AI Agent To Board of Directors, CEO Says

By: msmash
16 October 2025 at 14:10
Hanneke Faber, CEO of global tech manufacturing company Logitech, says she'd be open to the idea of having an AI-powered board member. From a report: "We already use [AI agents] in almost every meeting," Faber said at the Fortune Most Powerful Women Summit in Washington, D.C., on Monday. While she said AI agents today (like Microsoft Copilot and internal bots) mostly take care of summarization and idea generation, that's likely to change owing to the pace at which the technology is developing. "As they evolve -- and some of the best agents or assistants that we've built actually do things themselves -- that comes with a whole bunch of governance things," Faber said. "You have to keep in mind and make sure you really want that bot to take action. But if you don't have an AI agent in every meeting, you're missing out on some of the productivity." "That bot, in real time, has access to everything," she continued.

Read more of this story at Slashdot.

California Enacts Landmark AI Safety Law But With Very Narrow Applicability

On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53 or the Act)[1], establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building upon existing California laws targeting AI such as AB 2013[2], the Act, which takes effect January 1, 2026 and imposes penalties up to $1 million per violation, creates immediate compliance obligations for AI developers of the most powerful frontier models.

The path to TFAIA was paved by failure. TFAIA’s predecessor SB 1047[3] overwhelmingly passed the legislature last year, but was ultimately blocked at the Governor’s desk. In his veto statement, Governor Newsom called for an approach to frontier model regulation β€œinformed by an empirical trajectory analysis of AI systems and capabilities,” criticizing SB 1047 for applying stringent standards to even the most basic functions[4]. TFAIA thus represents a strategic pivot to regulation focused only on the most impactful AI models, which eliminates the kill switch requirement (which would mandate full shutdown capabilities for noncompliant systems), rigid testing and auditing regime and aggressive 72-hour timeline for incident reporting that doomed its predecessor.

TFAIA serves as California’s attempt to strike the balance of advancing AI innovation and competition while underscoring accountability for responsible AI development. The Act aims to bolster public trust and increase awareness of AI-specific risks by requiring developers to think critically about frontier AI capabilities.

Scope and Thresholds

Scoped narrowly to target the most powerful models capable of significant and catastrophic impact, TFAIA imposes certain requirements on β€œfrontier models,” defined as foundation models (or general purpose models that are trained on broad data sets) trained using or intending to use a quantity of computing power greater than 10^26 integer or floating-point operations.[5]Β  In particular, all β€œfrontier developers” (or persons that β€œtrained or initiated the training” of frontier models) face baseline transparency requirements, with more burdensome obligations imposed on β€œlarge frontier developers” (namely, frontier developers that, together with affiliates, had annual gross revenues above $500 million in the preceding year).

Tailoring its scope even further, TFAIA focuses many of its requirements on prevention of β€œcatastrophic risk”, defined as a foreseeable and material risk that a frontier model could (1) materially contribute to the death or serious injury of 50 or more people or (2) cause at least $1 billion in damages to property, in either case, arising from a single incident involving a frontier model, doing any of the following: (a) providing expert-level assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon; (b) engaging in criminal conduct (conduct that would constitute murder, assault, extortion or theft) or a cyberattack, without meaningful human intervention; or (c) evading the control of its frontier developer or user.

Key Compliance Provisions

TFAIA imposes certain requirements on all frontier models, with heightened obligations on large frontier model developers:

  1. Transparency Reports. At or before the time of deploying a frontier model (or a substantially modified version of an existing frontier model), frontier model developers must implement and publish a transparency report on their website. Reports, which can under the Act be embedded in model or system cards, must include (a) the website of the frontier developer, (b) model details (e.g., release date, languages supported, intended uses, modalities, restrictions) and (c) mechanisms by which a person can communicate with the frontier developer.[6]
    Large frontier developers must further (x) include summaries of assessments of catastrophic risks resulting from use of the frontier model, the results of such assessments, the role of any third-party evaluators and the steps taken to fulfill the requirements of the frontier AI framework (see below) and (y) transmit to the Office of Emergency Services reports of any assessments of catastrophic risk resulting from internal use of their frontier models every three months or pursuant to another reasonable schedule specified by the developer. Β The Act tasks the Office of Emergency Services with establishing a mechanism by which large frontier developers can confidentially submit such assessment reports of catastrophic risk.
  2. Critical Safety Incident Reporting. Frontier developers are required to report β€œcritical safety incidents”[7] to the Office of Emergency Services within 15 days of discovery.Β  To the extent a critical safety incident poses imminent risk of death or serious physical injury, the reporting window is shortened to 24 hours, with disclosure required to an appropriate authority based on the nature of the incident and as required by law.Β  Note, critical safety incidents pertaining to foundation models that do not qualify as frontier models are not required to be reported.Β  Importantly, TFAIA exempts the following reports from disclosure under the California Public Records Act: reports regarding critical safety incidents, reports of assessments of catastrophic risk and covered employee reports made pursuant to the whistleblower protections described below.Β 
  3. Frontier AI Frameworks for Large Frontier Developers. In addition to the above, large frontier developers must publish an annual (or, upon making a material modification to its framework, within 30 days of such modification) frontier AI framework describing the technical and organizational protocols relied upon to manage and assess how catastrophic risks are identified, mitigated, and governed. The framework must include documentation of a developer’s alignment with national/international standards, governance structures, thresholds used to identify and assess the frontier model’s capabilities to pose a catastrophic risk, mitigation processes (including independent review of potential for catastrophic risks and effectiveness of mitigation processes) and cybersecurity practices and processes for identifying and responding to critical safety incidents.Β  Large frontier developers are prohibited from making false or misleading claims about catastrophic risks from their frontier models or their compliance with their published frontier AI framework.Β  Additionally, these developers are permitted to redact information necessary to protect trade secrets, cybersecurity, public safety or national security or as required by law as long as they maintain records of unredacted versions for a period of at least five years.

Other Notable Provisions

In addition to the requirements imposed on frontier models, TFAIA resurrects CalComputeβ€”a consortium tasked with development of a framework for the creation of a public cloud computing cluster first envisioned under SB 1047β€”which provides for access to advanced computing capabilities to support safe, equitable and sustainable AI development and deployment in the public interest.Β 

TFAIA also enhances protections for whistleblowers by (1) prohibiting frontier developers from adopting rules that would prevent employees from reporting catastrophic risks or retaliating against employees who report such risks, (2) requiring frontier developers to provide notice to their employees once a year of their rights as whistleblowers and (3) requiring large frontier developers to implement and maintain anonymous internal reporting channels. Notably, whistleblowers are empowered to bring civil actions for injunctive relief (as well as recovery of attorneys’ fees) against frontier developers for violations of their rights under the Act.

Enforcement and Rulemaking

Large frontier developers that fail to publish TFAIA-compliant reports or other documentation, make a false statement about catastrophic risk or their compliance with their frontier AI framework, fail to report a critical safety incident or fail to comply with their frontier AI framework could face penalties up to $1 million per violation, scaled to the severity of the offense. Such penalties can only be recovered by the Attorney General bringing a civil action.Β 

To ensure that the applicability of the TFAIA reflects technological change, the Act empowers the California Department of Technologyβ€”as opposed to the Attorney General as envisioned under SB 1047β€”to assess technological developments, research and international standards and recommend updates to key statutory definitions (of β€œfrontier model,” β€œfrontier developer” and β€œlarge frontier developer”) on or before January 1, 2027 and annually thereafter.Β 

Key Takeaways

With TFAIA, California provides a blueprint for regulations focused on the most impactful and powerful AI technology, establishing transparency, disclosure, and governance requirements for frontier model developers. Β A similar bill, the Responsible AI Safety and Education (RAISE) Act, regulating frontier models awaits the signature of Governor Hochul in New York.Β  Although TFAIA and RAISE have similar applicability and frameworks,[8] RAISE imposes stricter requirements (72-hour window for reporting safety incidents) and higher penalties (up to $10 million for a first violation and $30 million for subsequent ones), similar to the failed SB 1047.Β  TFAIA’s success in navigating gubernatorial approvalβ€”where SB 1047 failedβ€”demonstrates the effectiveness of a transparency-first approach over prescriptive mandates (as TFAIA largely focuses on disclosure requirements for covered models whereas RAISE does not require transparency reporting to the same extent nor include whistleblower protections, instead focusing on enforcement by imposing strict liability and strictly prohibiting models that create unreasonable risk of critical harms), suggesting the RAISE Act may be subject to further narrowing, or even a veto, by Governor Hochul.Β 

Most businesses, including the vast majority of AI developers, will be relieved that TFAIA has such narrow applicability.Β  For the few businesses that might meet TFAIA’s applicability thresholds, the law represents both immediate compliance obligations and a preview of the regulatory landscape to come. These businesses should:

  1. Conduct a threshold analysis to determine frontier developer or large frontier developer status
  2. Review existing AI safety practices against TFAIA requirements, particularly focusing on safety framework documentation and incident reporting capabilities
  3. Develop comprehensive frontier AI frameworks addressing the law’s required elements, including governance structures, risk assessment thresholds and cybersecurity practices
  4. Implement robust documentation systems to support transparency reporting requirements for model releases and modifications
  5. Create incident response procedures to identify and report critical safety incidents within required timelines (15-day standard, 24-hour emergency)
  6. Update whistleblower reporting mechanisms and ensure employees receive notice of their rights under the law
  7. Develop scalable compliance frameworks accommodating varying state requirements as other states, including New York, consider similar AI safety laws
  8. Consider voluntary adoption of TFAIA-style frameworks as industry best practices, even for companies below current thresholds

[1] The text of the Act can be found here.

[2] AB 2013 requires developers of generative AI systems to post documentation on their website describing the dataset(s) used for system training.

[3] The text of SB 1047 can be found here.

[4] https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf.

[5] The computing power minimum includes computing from both initial training and subsequent fine-tuning or modifications.

[6] Notably, frontier developers can redact portions of their transparency reports to protect trade secrets and guard against cybersecurity or public safety threats; however, any such redactions must be justified within the repot which must be maintained for 5 years.

[7] The Act defines a β€œcritical safety incident” to mean any of the following: (1) unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model that results in death or bodily injury; (2) harm resulting from the materialization of a catastrophic risk; (3) loss of control of a frontier model causing death or bodily injury; or (4) a frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.

[8] Unlike TFAIA, RAISE instead applies only to β€œlarge developers” defined as persons thatΒ  have (1) trainedΒ  atΒ  leastΒ  one frontier model and (2) spent over $100 million in aggregate compute costs in training frontier models.

Are AI Agents Compromised By Design?

By: BeauHD
14 October 2025 at 19:20
Longtime Slashdot reader Gadi Evron writes: Bruce Schneier and Barath Raghavan say agentic AI is already broken at the core. In their IEEE Security & Privacy essay, they argue that AI agents run on untrusted data, use unverified tools, and make decisions in hostile environments. Every part of the OODA loop (observe, orient, decide, act) is open to attack. Prompt injection, data poisoning, and tool misuse corrupt the system from the inside. The model's strength, treating all input as equal, also makes it exploitable. They call this the AI security trilemma: fast, smart, or secure. Pick two. Integrity isn't a feature you bolt on later. It has to be built in from the start. "Computer security has evolved over the decades," the authors wrote. "We addressed availability despite failures through replication and decentralization. We addressed confidentiality despite breaches using authenticated encryption. Now we need to address integrity despite corruption." "Trustworthy AI agents require integrity because we can't build reliable systems on unreliable foundations. The question isn't whether we can add integrity to AI but whether the architecture permits integrity at all."

Read more of this story at Slashdot.

Walmart, ChatGPT Team Up For Shopping

By: BeauHD
14 October 2025 at 18:40
Walmart announced a new partnership with OpenAI that will let customers shop using ChatGPT. "For many years now, eCommerce shopping experiences have consisted of a search bar and a long list of item responses. That is about to change," Walmart CEO Doug McMillon said in a statement. NBC News reports: It was unclear Tuesday what the terms of the Walmart-AI partnership would be. The announcement also did not say when shoppers can expect to see ChatGPT integrated with their Walmart online shopping experiences, only that it's coming "soon." The OpenAI announcement is part of a broader push by Walmart, the biggest private employer in the U.S., to incorporate AI into its daily operations. "We're excited to partner with Walmart to make everyday purchases a little simpler. It's just one way AI will help people every day under our work together," Sam Altman, the co-founder and CEO of OpenAI, said in a statement. The partnership could also serve OpenAI by introducing ChatGPT to a massive set of consumers who may not be as accustomed to using AI chats in their shopping as OpenAI's core user base. "There is a native AI experience coming that is multi-media, personalized and contextual," said Walmart's McMillon.

Read more of this story at Slashdot.

Salesforce Says AI Customer Service Saves $100 Million Annually

By: msmash
14 October 2025 at 17:21
Salesforce says it's saving about $100 million a year by using AI tools in the software company's customer service operations. From a report: The company is working to sell AI features that can handle work such as customer service or early-stage sales. To illustrate the value of the Agentforce product to business clients, Salesforce has been vocal about its own use of the technology. Chief Executive Officer Marc Benioff announced the statistic on Salesforce's savings during a speech Tuesday at the annual Dreamforce conference in San Francisco. The company said more than 12,000 customers are using Agentforce. For example, Reddit was able to cut customer support resolution time by 84%, Salesforce said.

Read more of this story at Slashdot.

Lawyer Caught Using AI While Explaining to Court Why He Used AI

By: msmash
14 October 2025 at 16:01
An anonymous reader shares a report: An attorney in a New York Supreme Court commercial case got caught using AI in his filings, and then got caught using AI again in the brief where he had to explain why he used AI, according to court documents filed earlier this month. New York Supreme Court Judge Joel Cohen wrote in a decision granting the plaintiff's attorneys' request for sanctions that the defendant's counsel, Michael Fourte's law offices, not only submitted AI-hallucinated citations and quotations in the summary judgment brief that led to the filing of the plaintiff's motion for sanctions, but also included "multiple new AI-hallucinated citations and quotations" in the process of opposing the motion. "In other words," the judge wrote, "counsel relied upon unvetted AI -- in his telling, via inadequately supervised colleagues -- to defend his use of unvetted AI." The case itself centers on a dispute between family members and a defaulted loan. The details of the case involve a fairly run-of-the-mill domestic money beef, but Fourte's office allegedly using AI that generated fake citations, and then inserting nonexistent citations into the opposition brief, has become the bigger story.

Read more of this story at Slashdot.

Indonesia's Film Industry Embraces AI To Make Hollywood-style Movies For Cheap

By: msmash
14 October 2025 at 15:22
Indonesia's film industry has started using generative AI tools to produce films at a fraction of Hollywood budgets. The country's filmmakers are deploying ChatGPT for scriptwriting, Midjourney for image generation, and Runway for video storyboarding. VFX artist Amilio Garcia Leonard told Rest of World that AI has reduced his draft editing time by 70%. The Indonesian Film Producer Association supports the technology. Indonesian films typically cost 10 billion rupiah ($602,500), less than 1% of major Hollywood productions. The sector employed about 40,000 people in 2020 and generated over $400 million in box office sales in 2023. Jobs for storyboarders, VFX artists, and voice actors are disappearing.

Read more of this story at Slashdot.

Generative AI Systems Miss Vast Bodies of Human Knowledge, Study Finds

By: msmash
14 October 2025 at 14:01
Generative AI models trained on internet data lack exposure to vast domains of human knowledge that remain undigitized or underrepresented online. English dominates Common Crawl with 44% of content. Hindi accounts for 0.2% of the data despite being spoken by 7.5% of the global population. Tamil represents 0.04% despite 86 million speakers worldwide. Approximately 97% of the world's languages are classified as "low-resource" in computing. A 2020 study found 88% of languages face such severe neglect in AI technologies that bringing them up to speed would require herculean efforts. Research on medicinal plants in North America, northwest Amazonia and New Guinea found more than 75% of 12,495 distinct uses of plant species were unique to just one local language. Large language models amplify dominant patterns through what researchers call "mode amplification." The phenomenon narrows the scope of accessible knowledge as AI-generated content increasingly fills the internet and becomes training data for subsequent models.

Read more of this story at Slashdot.

Beyond the Black Box: Building Trust and Governance in the Age of AI

14 October 2025 at 08:00

Balancing innovation with ethical governance is crucial for ensuring fairness, accountability, and public trust in the age of intelligent machines.

The post Beyond the Black Box: Building Trust and Governance in the Age of AI appeared first on SecurityWeek.

Red, blue, and now AI: Rethinking cybersecurity training for the 2026 threat landscape

By: Greg Otto
14 October 2025 at 05:00

Cybersecurity today is defined by complexity. Threats evolve in real time, driven by AI-generated malware, autonomous reconnaissance, and adversaries capable of pivoting faster than ever.Β 

In a recent survey by DarkTrace of more than 1,500 cybersecurity professionals worldwide, nearly 74% said AI-powered threats are a major challenge for their organization, and 90% expect these threats to have a significant impact over the next one to two years.Β 

Meanwhile, many organizations are still operating with defensive models that were built for a more static world. These outdated training environments are ad hoc, compliance-driven, and poorly suited for the ever-changing nature of today’s security risks.

What’s needed now within organizations and cybersecurity teams is a transformation from occasional simulations to a daily threat-informed practice. This means changing from fragmented roles to cross-functional synergy and from a reactive defense to operational resilience.Β 

At the heart of that transformation lies Continuous Threat Exposure Management (CTEM), a discipline β€” not a tool or a project β€” that enables organizations to evolve in step with the threats they face.

Why traditional models no longer work

Legacy training models that include annual penetration tests, semi-annual tabletop exercises, and isolated red vs. blue events are no longer sufficient. They offer limited visibility, simulate too narrow a scope of attack behavior, and often check a compliance box without building lasting and strategic capabilities.

Even worse, they assume adversaries are predictable and unchanging. But as we know, AI-generated malware and autonomous reconnaissance have raised the bar. Threat actors are now faster, more creative, and harder to detect.Β 

Today’s attackers are capable of developing evasive malware and launching attacks that shift in real time. To meet this evolving threat environment, organizations must shift their mindset before they can shift their tactics.Β 

Embedding CTEM into daily practice

CTEM offers a fundamentally different approach. It calls for operationalized resilience, where teams systematically test, refine, and continually evolve their defensive posture daily.Β 

This is not done through broad-stroke simulations, but through atomic, context-aware exercises targeting individual techniques relevant to their specific threat landscape. This is also done one sub-technique at a time. Teams look at one scenario, then iterate, refine, and move to the next.Β 

This level of precision ensures organizations are training for the threats that actually matter β€” attacks that target their sector, their infrastructure, and their business logic. It also creates a steady rhythm of learning that helps build enduring security reflexes.

Real-time breach simulations: training under pressure

What separates CTEM from traditional testing is not just frequency, but authenticity. Real-time breach simulations aren’t hypothetical. These simulations are designed to replicate real adversarial behavior, intensity, and tactics. If they are done right, they mirror the sneakiness and ferocity of live attacks.

We should keep in mind that authenticity doesn’t just come from tools but also from the people designing the simulations. You can only replicate real-world threats if your SOC teams are keeping current with today’s threat landscape. Without that, simulations risk becoming just another theoretical exercise.Β 

These complex scenarios don’t just test defenses; they reveal how teams collaborate under pressure, how fast they detect threats, and whether their response protocols are aligned with actual threat behavior.

Analytics as a feedback loop

What happens after a simulation is just as important as the exercise itself. The post-simulation analytics loop offers critical insights into what worked, what didn’t, and where systemic weaknesses lie.Β 

Granular reporting is essential, as it allows organizations to identify issues with skills, processes, or coordination. By learning the specifics and gaining meaningful metrics β€” including latency in detection, success of containment, and coverage gaps β€” they can turn simulations into actionable intelligence.Β 

Over time, recurring exercises using similar tradecraft help measure progress with precision and determine if improvements are taking hold or if additional refinements are needed.

A blueprint for CISOs: building resilient, cross-functional teams

For CISOs and security leaders, adopting CTEM is not just about adding more tools β€” it’s about implementing culture, structure, and strategy.Β 

This is a blueprint for embedding CTEM into an organization’s security protocols:

  • Integrate tactical threat intelligence. Training must be based on real-world intelligence. Scenarios disconnected from the current threat landscape are at best inefficient, at worst misleading.
  • Align red and blue teams through continuous collaboration. Security is a team sport. Silos between offensive and defensive teams must be broken down. Shared learnings and iterative refinement cycles are essential.
  • Engage in simulation, not just instruction. Structured training is the foundation, but true readiness comes from cyber incident simulation. Teams need to move from knowing a technique to executing it under stress, in an operational context.
  • Establish CTEM as a daily discipline. CTEM must be part of the organization’s DNA and a continuous process. This requires organizational maturity, dedicated feedback loops, and strong process ownership.
  • Use metrics to drive learning. Evidence-based repetition depends on reliable data. Analytics from breach simulations should be mapped directly to skills development and tooling performance.

The role of AI in cybersecurity training

While attackers are already using AI to their advantage, defenders can use it too, but with care.Β 

AI isn’t a replacement for real-world training scenarios. Relying on it alone to create best-practice content is a mistake. What AI can do well is speed up content delivery, adapt to different learners, and personalize the experience.Β 

It can also identify each person’s weaknesses and guide them through custom learning paths that fill real skill gaps. In 2026, expect AI-driven personalization to become standard in professional development, aligning learner needs with the most relevant simulations and modules.

Beyond tools: making CTEM a culture

Ultimately, CTEM succeeds when it’s embraced not as a feature or a product but as a discipline woven into the daily practices of the organization.Β 

It also requires careful development. Red and blue teams must be open, transparent, and aligned. It’s not enough to simulate the threat. Security teams must also simulate to match an adversary’s intensity in order to build reflexes strong enough to withstand the real thing.Β 

The organizations that take this path won’t just respond faster to incidents β€” they’ll be able to anticipate and adapt and cultivate resilience that evolves as quickly as the threats do.

Dimitrios Bougioukas is vice president of training at Hack The Box, where he leads the development of advanced training initiatives and certifications that equip cybersecurity professionals worldwide with mission-ready skills.

The post Red, blue, and now AI: Rethinking cybersecurity training for the 2026 threat landscape appeared first on CyberScoop.

There's No 'AI Bubble', Says Yahoo Finance Executive Editor

12 October 2025 at 16:44
"I'm here to say we have to give these AI bubble predictions a rest," says Yahoo Finance executive editor Brian Sozzi. First of all, AI is a real technology being deployed in real ways inside of Corporate America. Second, this technology is requiring more physical assets in the ground β€” which are being built to support AI's real-world application. What Zach Dell (son of Michael Dell) is working on at startup Base Power (which just raised $1 billion) impressed me this week. It's addressing a key issue β€” power availability and costs in part because of rising stress on the grid due to AI development. Next, the spending on AI infrastructure doesn't strike me as reckless. I talk to CFOs and they walk me through their thinking, which seems logical. They aren't foaming at the mouth with wild-eyed predictions of grandeur similar to the late '90s. Plus, the tech giants making the biggest AI investments are fueling their ambitions by cash on hand β€” not loading up balance sheets with debt. The upstarts in AI are well funded, not being 100% stupid in their organizational build-outs. They're working on tangible technology that has actual orders behind it... Lastly here in my scolding of the AI worrywarts is that valuations don't support the warning calls. According to new research out of Goldman Sachs this week, the median forward P/E ratio across the Magnificent Seven is 27 times, or 26 times if excluding Tesla (TSLA), which has a much higher multiple than the other companies. This is roughly half the equivalent valuation of the biggest seven companies in the late 1990s, while the dominant companies in Japan (mostly banks) traded at higher valuations still. What's more, the current enterprise-to-sales ratios are also much lower than those of the dominant companies in the late 1990s. "So it is true that valuations are high but, in our view, generally not at levels that are as high as are typically seen at the height of a financial bubble," said Goldman Sachs strategist Peter Oppenheimer.

Read more of this story at Slashdot.

AI Slop? Not This Time. AI Tools Found 50 Real Bugs In cURL

12 October 2025 at 11:34
The Register reports: Over the past two years, the open source curl project has been flooded with bogus bug reports generated by AI models. The deluge prompted project maintainer Daniel Stenberg to publish several blog posts about the issue in an effort to convince bug bounty hunters to show some restraint and not waste contributors' time with invalid issues. Shoddy AI-generated bug reports have been a problem not just for curl, but also for the Python community, Open Collective, and the Mesa Project. It turns out the problem is people rather than technology. Last month, the curl project received dozens of potential issues from Joshua Rogers, a security researcher based in Poland. Rogers identified assorted bugs and vulnerabilities with the help of various AI scanning tools. And his reports were not only valid but appreciated. Stenberg in a Mastodon post last month remarked, "Actually truly awesome findings." In his mailing list update last week, Stenberg said, "most of them were tiny mistakes and nits in ordinary static code analyzer style, but they were still mistakes that we are better off having addressed. Several of the found issues were quite impressive findings...." Stenberg told The Register that about 50 bugfixes based on Rogers' reports have been merged. "In my view, this list of issues achieved with the help of AI tooling shows that AI can be used for good," he said in an email. "Powerful tools in the hand of a clever human is certainly a good combination. It always was...!" Rogers wrote up a summary of the AI vulnerability scanning tools he tested. He concluded that these tools β€” Almanax, Corgea, ZeroPath, Gecko, and Amplify β€” are capable of finding real vulnerabilities in complex code. The Register's conclusion? AI tools "when applied with human intelligence by someone with meaningful domain experience, can be quite helpful." jantangring (Slashdot reader #79,804) has published an article on Stenberg's new position, including recently published comments from Stenberg that "It really looks like these new tools are finding problems that none of the old, established tools detect."

Read more of this story at Slashdot.

❌
❌