TeamPCP orchestrated one of the most sophisticated multi-ecosystem supply chain campaigns publicly documented to date. Cascading through developer tooling, it compromised LiteLLM and exposed how AI proxy services that concentrate API keys and cloud credentials become high-value collateral when an upstream dependency is breached.
OpenClaw (aka Clawdbot or Moltbot) represents a new frontier in agentic AI: powerful, highly autonomous, and surprisingly easy to use. In this research, we examine how its capabilities compare to its predecessors and highlight the security risks inherent to the agentic AI paradigm.
The "bigger is better" era of AI is hitting a wall. We are in an LLM bubble, characterized by ruinous inference costs and diminishing returns. The future belongs to agentic AI powered by specialized Small Language Models (SLMs). Think of it as a shift from hiring a single expensive genius to running a highly efficient digital factory. It's cheaper, faster, and frankly, the only way to make agents work at scale.
Poisoned data. Malicious LoRAs. Trojan model files. AI attacks are stealthier than ever, often invisible until it's too late. Here's how to catch them before they catch you.
A single AI chatbot breach at Salesloft Drift exposed data from more than 700 companies, including security leaders. The attack shows how AI integrations expand risk, and why controls like IP allow-listing, token security, and monitoring are critical.