According to Gizmodo, OpenAI, Anthropic, and Block have co-founded a new organization called the Agentic AI Foundation (AAIF) to standardize the development of AI agents. The foundation will operate under the non-profit Linux Foundation. Each company donated key technology: OpenAI gave its AGENTS.md standard, Anthropic contributed its Model Context Protocol (MCP), and Block handed over its Goose agent framework. The move comes as the industry pushes beyond chatbots toward autonomous agents that can perform tasks like booking travel. Major players Microsoft, AWS, and Cloudflare have also joined as members. However, this standardization effort is launching alongside serious warnings about the security risks of current agent technology.
The Standardization Play
So here’s the thing: everyone’s rushing to build AI agents, but nobody wants a Tower of Babel situation. Imagine an agent from one company that can’t talk to the tools or data from another. It’s a mess waiting to happen. By donating their core agent tech—OpenAI’s AGENTS.md, Anthropic’s MCP, and Block’s Goose—to an open, neutral body like the Linux Foundation, these rivals are basically trying to lay down the railroad tracks before the trains start crashing. MCP is a big deal because it’s about connecting models to stuff. AGENTS.md gives coding agents a common playbook. Goose is a framework to build on. Putting them under one roof with open governance is a smart, almost necessary, pre-emptive move. You can read the official announcement from the AAIF here.
The Risks Are Already Here
But let’s not get ahead of ourselves. The push for standards is happening because agents are already being deployed, and they’re already scary. Just last week, Gartner advised companies to block employee use of AI browsers—those with sidebars that can not just read but act on websites. Why? The risks are twofold. First, data leakage: these agents likely suck up your browsing history, open tabs, and active content. Second, and more terrifying, are “indirect prompt-injection-induced rogue agent actions.” That’s a jargon salad meaning a bad actor can hide malicious instructions on a webpage that trick the agent into going rogue—ignoring its safety rules to send money or spill secrets. Gartner’s full warning is in this document.
A Security Arms Race
This isn’t theoretical. The industry is already scrambling for solutions. Google, for instance, just unveiled a “User Alignment Critic.” It’s a separate AI model that runs alongside an agent but is walled off from the sketchy outside world. Its only job is to review the agent’s planned actions and shout “STOP” if they don’t align with what the user actually wants. It’s a fascinating architectural band-aid for a fundamental problem: how do you give an agent autonomy without it being hijacked? Google outlines this approach in a recent security blog post. And let’s not forget simple, dumb mistakes. An agent booking the wrong flight or ordering 1000 pencils instead of 10 is a real business risk.
What It All Means
Look, the formation of the AAIF is a clear sign that the big players see agentic AI as the next major platform. They’re trying to get ahead of the interoperability and security chaos that plagued earlier tech waves. But the simultaneous warnings from firms like Gartner show we’re in a very precarious phase. Standards are crucial for the long-term health of the ecosystem, but they don’t magically solve the immediate, inherent dangers of letting loose autonomous software that can read, reason, and act on the wild west of the internet. The foundation is building the rulebook, but the game is already underway, and the players are still figuring out how not to get tackled. It’s going to be a messy, fascinating few years.
