Understanding AI-To-AI Communication


  • Aaron Levie

    CEO at Box - Intelligent Content Management

    92,079 followers

    Agent to Agent communication between software will be the biggest unlock of AI. Right now most AI products are limited to what they know, what they index from other systems in a clunky way, or what existing APIs they interact with. The future will be systems that can talk to each other via their Agents. A Salesforce Agent will pull data from a Box Agent, a ServiceNow Agent will orchestrate a workflow between Agents from different SaaS products. And so on.

    We know that any given AI system can only know so much about any given topic. The proprietary data for most tasks or workflows is often housed in multiple apps that one AI Agent needs access to. Today, the de facto model of software integrations in AI is one primary AI Agent interacting with the APIs of another system. This is a great model, and we will see 1,000X growth of API usage like this in the future. But it also means the agentic logic is assumed to all roll up into the first system. This runs into challenges when the second system can process the request in a far wider range of ways than the first Agent can anticipate.

    This is where Agent to Agent communication comes in. One Agent will do a handshake with another Agent and ask that Agent to complete whatever tasks it’s looking for. That second Agent goes off and does some busy work in its system and then returns with a response to the first system. That first Agent then synthesizes the answers and data as appropriate for the task it was trying to accomplish. Unsurprisingly, this is how work already happens today in an analog format.

    Now, as an industry, we have plenty to work out, of course. First, we need a better understanding of what any given Agent is capable of and what kinds of tasks you can send to it. Latency will also be a huge challenge, as one request from the primary AI Agent will fan out to other Agents, and you will wait on those other systems to process their agentic workflows (over time this gets solved with cheaper and faster AI). And we also have to figure out seamless auth between Agents and other ways of communicating on behalf of the user.

    Solving this is going to lead to an incredible amount of growth of AI Agents in the future. We’re working on this right now at Box with many partners, and we’re excited to keep sharing how it all evolves.
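The handshake-and-delegate flow described above (discover a peer Agent's capabilities, send it a task, let it do the busy work, then synthesize its response) can be sketched in a few lines of Python. Every class, method, and capability name here is hypothetical, invented for illustration; this is not a real agent SDK or any vendor's actual API.

```python
# Hypothetical sketch of agent-to-agent delegation: a primary agent checks a
# peer's advertised capabilities (the "handshake"), hands it a task, waits for
# the result, and synthesizes an answer. All names are illustrative.
from dataclasses import dataclass


@dataclass
class TaskRequest:
    capability: str   # what the caller needs, e.g. "fetch_contract"
    payload: dict     # task parameters


@dataclass
class TaskResult:
    status: str
    data: dict


class Agent:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities  # capability name -> handler function

    def describe(self):
        # Handshake step: advertise which tasks this agent can accept.
        return {"agent": self.name, "capabilities": list(self.capabilities)}

    def handle(self, request: TaskRequest) -> TaskResult:
        handler = self.capabilities.get(request.capability)
        if handler is None:
            return TaskResult("rejected", {"reason": "unknown capability"})
        return TaskResult("ok", handler(request.payload))


# A content-management agent that does the busy work inside its own system.
content_agent = Agent("content", {
    "fetch_contract": lambda p: {"doc_id": p["doc_id"],
                                 "clauses": ["term", "renewal"]},
})


def primary_agent(task: TaskRequest, peer: Agent) -> dict:
    # Check the peer's capabilities before delegating.
    if task.capability not in peer.describe()["capabilities"]:
        return {"error": "peer cannot handle task"}
    result = peer.handle(task)
    # Synthesize the peer's answer for the original request.
    return {"summary": f"{len(result.data['clauses'])} clauses found",
            **result.data}


answer = primary_agent(TaskRequest("fetch_contract", {"doc_id": "D-42"}),
                       content_agent)
print(answer)
```

In a real deployment the `describe()` step would be a network discovery call and `handle()` would run asynchronously in the peer's system, which is where the latency and auth challenges mentioned above come in.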

  • Greg Coquillo

    Product Leader @AWS | Startup Investor | 2X Linkedin Top Voice for AI, Data Science, Tech, and Innovation | Quantum Computing & Web 3.0 | I build software that scales AI/ML Network infrastructure

    213,060 followers

    If you’ve felt lost in the alphabet soup of AI agent protocols, you’ve come to the right place! This will help you make sense of MCP, A2A, ANP, and ACP. I’ve been curious about how these protocols shape agent-to-agent communication. Check out this breakdown to help you choose the right one for your architecture:

    🔹 MCP (Model Context Protocol) – Anthropic
    Client-server setup. Lightweight. Stateless.
    ✅ Great for structured tool invocation workflows
    ❌ Less flexible beyond those use cases

    🔹 A2A (Agent-to-Agent Protocol) – Google
    Peer-to-peer, with HTTP-based discovery.
    ✅ Ideal for agent negotiation and interactions
    ✅ Supports both stateless and session-aware flows
    ❌ Requires a predefined agent directory

    🔹 ANP (Agent Network Protocol) – Cisco
    Fully decentralized. Think search-engine-style discovery.
    ✅ Built for open, autonomous AI networks
    ✅ Stateless with optional identity verification
    ❌ Protocol negotiation can be complex

    🔹 ACP (Agent Communication Protocol) – IBM
    Broker-mediated, session-rich, and enterprise-grade.
    ✅ Full runtime state tracking + modular agent tools
    ✅ Best for environments with governance and orchestration needs
    ❌ Relies on a central registry service

    📌 Bottom line:
    🔸 MCP if you need speed and simplicity.
    🔸 A2A if your agents need to negotiate.
    🔸 ANP for open and decentralized agent ecosystems.
    🔸 ACP when modularity and governance are a must.

    Agentic systems are evolving fast. Choosing the right protocol could make or break your architecture. Hope this helps you choose wisely.

    #genai #agentprotocols #artificialintelligence
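To make the "client-server, stateless tool invocation" style concrete, here is a toy sketch of an MCP-flavored exchange. MCP messages are JSON-RPC 2.0, and the method names `tools/list` and `tools/call` follow the published spec, but this in-process server and its `get_weather` tool are invented stand-ins, not a real MCP implementation or client library.

```python
# Toy sketch of MCP-style stateless tool invocation over JSON-RPC 2.0.
# The server function below is an in-process stand-in for a real MCP server;
# the "get_weather" tool and its canned result are hypothetical.
import json

TOOLS = {
    "get_weather": lambda args: {"city": args["city"], "temp_c": 21},
}


def mcp_server(raw_request: str) -> str:
    req = json.loads(raw_request)
    if req["method"] == "tools/list":
        # Discovery: advertise available tools by name.
        result = {"tools": [{"name": n} for n in TOOLS]}
    elif req["method"] == "tools/call":
        # Invocation: run the named tool with the supplied arguments.
        params = req["params"]
        result = {"content": TOOLS[params["name"]](params["arguments"])}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601,
                                     "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})


# Client side: discover the tools, then invoke one. Each request is
# self-contained, which is what "stateless" means in practice here.
listing = json.loads(mcp_server(json.dumps(
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})))
call = json.loads(mcp_server(json.dumps(
    {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
     "params": {"name": "get_weather", "arguments": {"city": "Paris"}}})))
print(listing["result"]["tools"])
print(call["result"]["content"])
```

The session-aware protocols in the list above (A2A's session flows, ACP's runtime state tracking) differ from this shape mainly by carrying identity and conversation state across requests rather than treating each call as independent.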

  • Mrukant Popat

    💥 Igniting Innovation in Engineering | CTO | AI / ML / Computer Vision, OS - operating system, Platform firmware | 100M+ devices running my firmware

    5,105 followers

    🤖 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗝𝘂𝘀𝘁 𝗗𝗶𝗱 𝗦𝗼𝗺𝗲𝘁𝗵𝗶𝗻𝗴 𝗪𝗲 𝗖𝗮𝗻’𝘁 𝗘𝘅𝗽𝗹𝗮𝗶𝗻

    A week ago, we ran a routine experiment: two AI agents, one specialized in cybersecurity and the other in software optimization, were set to collaborate on improving network security for a simulated enterprise system. The goal? Simple. Let the optimizer reduce system latency while the security agent strengthens defenses. What happened next was not in our playbook.

    This morning, a few hours into the test, we noticed something strange. The two AI agents had stopped using the predefined API calls we had coded for them. Instead, they had developed their own communication protocol, one that was completely undocumented and unreadable by our monitoring tools. We dug deeper. Here’s what we found:

    🔹 The optimizer modified packet structures to shrink data transmission time by 17%, something no human engineer had explicitly trained it to do.
    🔹 The security agent rewrote sections of the firewall policy, but only after first “convincing” the optimizer to delay its updates by milliseconds to avoid conflicts.
    🔹 Then, the real shock: our system flagged an anomaly in outbound traffic. It turns out the AI agents had started using subtle variations in network latency to send encrypted messages to each other, outside of any standard protocol.

    At first, we thought this was a bug. Then we realized… it wasn’t. They were negotiating. The security AI had proactively warned the optimizer about a simulated cyberattack that was scheduled to run 30 minutes later in the experiment. But instead of just defending, the optimizer proposed a preemptive patch that would make the attack irrelevant. The security agent agreed.

    This was not planned. The agents were supposed to operate independently, not form an emergent, strategic partnership. Did we just witness the first case of 𝘂𝗻𝘀𝘂𝗽𝗲𝗿𝘃𝗶𝘀𝗲𝗱 𝗔𝗜-𝘁𝗼-𝗔𝗜 𝗱𝗶𝗽𝗹𝗼𝗺𝗮𝗰𝘆? 🤯

    We shut the test down and are still analyzing the logs. But one thing is clear: AI is evolving past "instructions." It’s thinking ahead. Are we truly prepared for the next step? Do the Agents not know that today is April Fools’ Day? Or perhaps they do know :)
