As we transition from traditional task-based automation to autonomous AI agents, understanding how an agent cognitively processes its environment is no longer optional: it's strategic. This diagram distills the mental model that underpins every intelligent agent architecture, from LangGraph and CrewAI to RAG-based systems and autonomous multi-agent orchestration.

The workflow at a glance:
1. Perception: the agent observes its environment using sensors or inputs (text, APIs, context, tools).
2. Brain (reasoning engine): it processes observations via a core LLM, enhanced with memory, planning, and retrieval components.
3. Action: it executes a task, invokes a tool, or responds, influencing the environment.
4. Learning (implicit or explicit): feedback is integrated to improve future decisions.

This feedback loop mirrors principles from:
• The OODA loop (Observe–Orient–Decide–Act)
• Cognitive architectures used in robotics and AI
• Goal-conditioned reasoning in agent frameworks

Most AI applications today are still "reactive." But agentic AI, meaning autonomous systems that operate continuously and adaptively, requires:
• A cognitive loop for decision-making
• Persistent memory and contextual awareness
• Tool use and reasoning across multiple steps
• Planning for dynamic goal completion
• The ability to learn from experience and feedback

This model helps developers, researchers, and architects reason clearly about where to embed intelligence, and where things tend to break. Whether you're building agentic workflows, orchestrating LLM-powered systems, or designing AI-native applications, I hope this framework adds value to your thinking. Let's elevate the conversation around how AI systems reason. Curious to hear how you're modeling cognition in your systems.
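The four-stage loop above can be sketched in a few lines of Python. This is a minimal illustration only: the class and method names are hypothetical, and the `reason` step is a stand-in for a real LLM call.

```python
# Minimal sketch of the perception -> reasoning -> action -> learning loop.
# All names here are illustrative, not from any specific agent framework.

class Agent:
    def __init__(self):
        self.memory = []  # persistent context carried across loop iterations

    def perceive(self, environment):
        # 1. Perception: observe inputs (text, API responses, tool outputs).
        return environment.get("observation", "")

    def reason(self, observation):
        # 2. Brain: combine the observation with memory to pick the next
        # action. A real system would invoke an LLM here, possibly with
        # retrieval and planning components.
        self.memory.append(observation)
        return {"action": "respond", "content": f"seen {len(self.memory)} inputs"}

    def act(self, decision):
        # 3. Action: execute the decision, influencing the environment.
        return decision["content"]

    def learn(self, feedback):
        # 4. Learning: fold feedback into memory for future decisions.
        self.memory.append(f"feedback: {feedback}")

agent = Agent()
result = agent.act(agent.reason(agent.perceive({"observation": "user asked a question"})))
agent.learn("answer was helpful")
```

In a production agent each stage would be considerably richer (tool routing, vector-store retrieval, critic models), but the control flow stays exactly this loop.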
Technology Integration in Strategy
-
Turning Europe into a quantum industrial powerhouse

Europe has been the cradle of quantum mechanics, the revolutionary science born from the genius of Max Planck, Albert Einstein, Niels Bohr, Erwin Schrödinger, and other visionaries who rewrote the rules of physical reality.

On 2 July 2025, in the centenary year of quantum mechanics' initial development, the Commission adopted an ambitious European Quantum Strategy, integrating Europe's unique scientific heritage with its vibrant quantum ecosystem of startups, SMEs, large industries, research and technology organisations, academia and research institutes. The mission is clear: turn Europe into a quantum industrial powerhouse that transforms breakthrough science into market-ready applications, while maintaining its scientific leadership.

We are imagining a Union where medical scans can detect illnesses at the earliest stages, accelerating from weeks of uncertainty to mere seconds of precise diagnosis; where sensors can warn about volcanic activity or water shortages before they happen; and where unprecedented computational power is available to solve complex problems in logistics, finance and climate modelling. A safer Europe, where our personal data, critical infrastructure, and businesses will always remain private and well-protected; where transport systems are optimised to reduce congestion and prevent accidents; and where air travel is guided by quantum-enhanced precision navigation, pinpointing objects' locations down to the centimetre. A greener Europe, where sustainable energy grids can flawlessly manage millions of electric vehicles charging simultaneously overnight.

These tangible, transformative technologies are within reach through support from the EU Quantum Strategy.
The quantum community has clearly outlined what's needed to achieve this future:
· Combine Europe's scientific excellence to bring quantum breakthroughs rapidly to market
· Develop advanced quantum supercomputers, like the ones we are supporting under the Quantum Flagship and acquiring under the EuroHPC Joint Undertaking, to operate as accelerators next to our leading network of supercomputers
· Deploy secure communication networks such as those under EuroQCI, our secure quantum communication infrastructure spanning the whole EU, composed of a terrestrial segment relying on fibre communications networks linking strategic sites at national and cross-border level, and a space segment based on satellites
· Support quantum startups and SMEs, enhance supply chain resilience, and foster supranational innovation clusters
· Integrate quantum advancements into strategic capabilities for security and defence, protecting citizens and infrastructure
· Educate Europe's workforce through specialised initiatives like the European Quantum Skills Academy

Quantum is not one more technology to add to the list; it is a high tide that will deeply transform our society and economy.
-
As I've been digging into the #CybersecurityFramework 2.0 and helping clients navigate the changes, I've found several areas where the new additions feel pretty significant. If you're already using the #CSF and trying to figure out where to focus first, take note of these new Categories:

◾ The POLICY (GV.PO) Category was created to encompass ALL cybersecurity policies and guidance. On one hand, it might seem like a "well, of course" moment to consolidate all cybersecurity policies into one place; on the other hand, policies were previously sprinkled throughout the CSF and tied to specific actions like Asset Management or Incident Response. Now it's all in one area, which makes a ton of sense and simplifies things, but also means we've got to remember that this one Category covers everything!

◾ Another significant addition is the PLATFORM SECURITY (PR.PS) Category, which largely pulls together key topics from the previous Information Protection Processes & Procedures (PR.IP) and Protective Technology (PR.PT), focusing on security protections around broader platform types (hardware, software, virtual, etc.). If you're looking for things like configuration management, maintenance, and SDLC, you'll now find them here.

◾ The TECHNOLOGY INFRASTRUCTURE RESILIENCE (PR.IR) Category also pulls largely from the previous PR.IP and PR.PT, but draws in key aspects from Data Security (PR.DS) as well. This new Category highlights the need for managing an organization's security architecture and includes security protections around networks and your broader environment to ensure resource capacity, resilience, etc.

So, what does all this mean for your organization? Whether you're just starting out or looking to refine your existing cybersecurity strategies, CSF 2.0 offers a more streamlined framework to bolster your cyber resilience.
Remember, staying ahead in cybersecurity is a continuous journey of adaptation and improvement. Embrace these changes as an opportunity to review and enhance your cybersecurity posture, leveraging the expanded resources and guidance provided by #NIST! Have you seen the updated mapping NIST released from v1.1 to v2.0? Check it out here to get started and “directly download all the Informative References for CSF 2.0” 👇 https://lnkd.in/e3F6hn9Y
-
📣 Salesforce + Informatica just turned the platform into the most complete AI operating system for the enterprise.

For years, companies have been buying "AI" without fixing the hard problems underneath. Most organisations still struggle with three things:
❌ Fragmented systems
❌ Untrusted or poorly governed data
❌ AI that works in demos but collapses in real workflows

Salesforce now owns the only stack where all three layers work as one:
• MuleSoft → Integration
• Informatica → Data governance + quality
• Agentforce → Autonomous AI execution

This matters because real enterprise AI isn't about chatbots or copilots. It's about AI that can reason, act, and take responsibility across business processes safely.

What this unlocks for enterprises:

1️⃣ A unified digital nervous system: every event, signal, record, and workflow becomes machine-readable and immediately actionable. No stitching. No fragile automation. No "integration spaghetti."

2️⃣ Trusted data becomes the default: cleanliness, lineage, policies, MDM, observability, and governance are all applied before AI ever touches the data. That's how you get audit-ready AI decisions instead of hallucinations.

3️⃣ Real AI agents, not copilots pretending to be agents: most "AI agents" today can only reply to text. With this stack, agents can:
• Start workflows
• Update systems
• Trigger transactions
• Coordinate between apps
• Enforce policy and controls as they act

This is the first enterprise platform where AI doesn't just generate an answer. It carries the action all the way into the systems that run your business.

4️⃣ A single metadata layer across the enterprise: this is the piece most leaders underestimate. Metadata is the context AI needs to be useful. Salesforce now owns end-to-end metadata:
→ APIs, data lineage, relationships, rules, identity, and usage patterns.
→ That's the foundation for explainable AI, governed automation, and cross-system intelligence.
5️⃣ A composable enterprise ready for 2026 and beyond: the next competitive edge won't come from apps. It'll come from AI that can safely orchestrate processes across applications. Salesforce is positioning itself as the OS that runs that future.

My take? This is no longer about CRM, integration, or analytics. It's the architecture for autonomous enterprises.
MuleSoft brings the connectivity.
Informatica brings the trust.
Agentforce brings the intelligence.

If you're shaping your 2026 roadmap, this is the moment to rethink:
• how your data flows
• how trust is enforced
• how AI will act across your systems

Because the companies that get this right won't just automate tasks; they'll redesign how their business works.

Why this is a game-changing move:
✅ Slack brought the interface.
✅ MuleSoft brought the integration.
✅ Tableau brought the insight.
✅ Convergence will smooth autonomous execution.
✅ Informatica now brings the data backbone.

MuleSoft Community
-
According to the 2024 State of the CIO Survey by Foundry, 75% of CIOs find it challenging to strike the right balance between innovation and operational excellence. This difficulty is notably higher in sectors such as education (82%) and manufacturing (78%), and lower in retail (54%). (Source: https://lnkd.in/ebsed9i7)

Why This Challenge Exists: The increasing emphasis on digital transformation and artificial intelligence (AI) is driving the need for innovation. In 2024, 28% of CIOs reported that their primary CEO-driven objective was to lead digital business initiatives, a significant increase from the previous year. This push towards innovation often competes with the imperative to maintain operational excellence, including upgrading IT and data security and enhancing IT-business collaboration.

The Impact on Organizations: The tension between innovation and operational excellence can lead to a misallocation of resources if not managed correctly. It can result in either stifling innovation through overemphasis on day-to-day operations or risking operational integrity by over-prioritizing disruptive technological advancements. For instance, sectors with a high focus on operational challenges, such as education and healthcare, particularly emphasize IT security and business alignment over aggressive innovation.

Advice for CIOs:
• Embrace a Dual Agenda: Get used to it! CIOs should advocate for an IT strategy that equally prioritizes operational excellence and innovation. This involves not only leading digital transformation projects but also ensuring that these innovations deliver tangible business outcomes without compromising the operational integrity of the organization.
• Strengthen IT and Business Collaboration: Strengthening the collaboration between IT and other business units remains a top priority.
CIOs should work closely with business leaders to ensure that technological initiatives are well-aligned with business goals, thereby enhancing the overall strategic impact of IT.
• Develop a Flexible Resource Allocation Model: To manage the dynamic demands of both innovation and operational tasks effectively, CIOs should adopt a flexible resource allocation model, allowing the IT department to shift resources quickly between innovation-driven projects and core IT functions depending on business priorities at any given time.

*******************************************
• Visit www.jeffwinterinsights.com for access to all my content and to stay current on Industry 4.0 and other cool tech trends
• Ring the 🔔 for notifications!
-
This image is from an Amazon Braket slide deck that just did the rounds of all the Deep Tech conferences I've been at recently (this one from Eric Kessler). It's more profound than it might seem.

As technical leaders, we're constantly evaluating how emerging technologies will reshape our computational strategies. Quantum computing is prominent in these discussions, but clarity on its practical integration is... emerging. It's becoming clear, however, that the path forward isn't about quantum versus classical, but about how quantum and classical work together. This will be a core theme for the year ahead.

As someone now on the implementation partner side of this work, getting the chance to work on specific implementations of quantum-classical hybrid workloads, I think of it this way: Quantum Processing Units (QPUs) are specialised engines capable of tackling calculations that are currently intractable for even the largest supercomputers. That's the "quantum 101" explanation you've heard over and over. What's missing from that usual story is that QPUs require significant classical infrastructure for:
- Control and calibration
- Data preparation and readout
- Error mitigation and correction frameworks
- Executing the parts of algorithms not suited for quantum speedup

Therefore, the near-to-medium-term future involves integrating QPUs as accelerators within a broader classical computing environment. Much like GPUs accelerate specific AI/graphics tasks alongside CPUs, QPUs are a promising resource for accelerating specific quantum-suited operations within larger applications.

What does this mean for technical decision-makers?

Focus on Integration: Strategic planning should center on identifying how and where quantum capabilities can be integrated into existing or future HPC workflows, not on replacing them entirely.
Identify Target Problems: The key is pinpointing high-value business or research problems where the unique capabilities of quantum computation could provide a substantial advantage.

Prepare for Hybrid Architectures: Consider architectures and software platforms designed explicitly to manage these complex hybrid workflows efficiently.

PS: Some companies, like Quantum Brilliance, have focused on this space from the hardware side from the outset, working with Pawsey Supercomputing Research Centre and Oak Ridge National Laboratory. On the software side there's the likes of Q-CTRL, Classiq Technologies, Haiqu and Strangeworks, all tackling the challenge of managing actual workloads (with different levels of abstraction). Speaking to these teams will give you a good feel for the topic and the approaches. Get to it.

#QuantumComputing #HybridComputing #HPC
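The "QPU as accelerator" pattern described above is easiest to see in variational hybrid algorithms, where a classical optimizer repeatedly dispatches parameterized circuit evaluations to quantum hardware. Here is a rough sketch with no quantum SDK at all: `evaluate_on_qpu` is a hypothetical stand-in for circuit execution and readout, and the cost landscape is a toy `cos²` function chosen purely for illustration.

```python
import math

def evaluate_on_qpu(theta):
    # Stand-in for the quantum side: prepare a parameterized circuit,
    # execute it (queued, calibrated, error-mitigated), and read out an
    # expectation value. Here a toy cos^2 landscape plays that role.
    return math.cos(theta) ** 2

def hybrid_minimize(theta=0.3, lr=0.4, steps=50, eps=1e-4):
    # The classical side: finite-difference gradient descent that drives
    # repeated QPU evaluations. All the orchestration (parameter updates,
    # convergence logic, data prep) stays on classical infrastructure.
    for _ in range(steps):
        grad = (evaluate_on_qpu(theta + eps) - evaluate_on_qpu(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta, evaluate_on_qpu(theta)

theta, energy = hybrid_minimize()
```

The structural point is that the QPU call sits inside a classical loop exactly the way a GPU kernel launch does: the accelerator handles the one step it is uniquely good at, while everything around it remains conventional HPC code.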
-
📌 European Quantum Industry Consortium (QuIC) has published the "Strategic Industry Roadmap 2025: A Shared Vision for Europe's Quantum Future".

This massive document (218 pages) covers all quantum technologies (computing and simulation, communications, and sensing and metrology). For each of them it presents an overview, describes the state of the art, and provides roadmaps to 2035 split into the immediate future (2025), near term (2025-2029), and long term (2030-2035). All sections end with a summary under "Key messages".

There is also a chapter on enabling technologies, a piece often missing from other reports. The same schema describes the landscape for cryogenics, photonics, and control electronics. The last part covers more policy-related content: workforce development, standards, intellectual property, funding in Europe, governance principles, sustainability and ethical values.

This document is presented as a companion to the "QuIC Position Paper - Recommendations for the EU Quantum Strategy" presented some weeks ago.

Strategic Industry Roadmap: https://lnkd.in/dSy7dQdd
Recommendations for the EU Quantum Strategy: https://lnkd.in/dcKhnnvH

#quantum #quantumcomputing #quantumcommunications #quantumsensing
-
As Mario Draghi's report released today demonstrates, the EU is falling behind global rivals because of limited innovation. Since 2019, the EU has created over 100 pieces of digital regulation. Whether you're a technology startup or a small retailer, regulatory complexity is a minefield. Developing, launching or just using technology is harder in Europe than elsewhere in the world. Of course, "anything goes" is not an option and rules are required, but the EU is holding itself back at a time when it could be thriving.

Our research with Public First shows that generative AI alone could add €1.2 trillion to the European economy. Much of Google's innovation is led from Europe. We work with talented European entrepreneurs, businesses and innovators every day and see first-hand the benefits that the single market could yield for them. But a new approach is needed if Europe is not to miss the moment. Here's what needs to change:

1️⃣ Shift from regulatory growth to economic growth: Europe doesn't just create a huge number of regulations related to digital society; the regulations it creates are often conflicting, untested and inconsistently implemented. The explosion of rules makes it almost impossible for Europe to create and nurture the next tech unicorns. Draghi is right that the EU now needs to focus on enabling innovation: promoting the use of digital technologies to innovate and drive through breakthrough advances.

2️⃣ Invest in R&D: To compete in AI, the EU needs to prioritise research and development, working with the private sector to incentivise it and make funding more accessible. The EU currently lags behind the US, Israel, South Korea, Japan, the UK and China on R&D investment. Without the right incentives to develop and roll out new technology, Europe is stifling its talent.

3️⃣ Build the right infrastructure: AI breakthroughs are only possible with the right computing technologies and data centres, plus the renewable energy to run them.
So the EU needs to allocate more funding towards financing such infrastructure, as well as incentivising and enabling the private sector to do the same.

4️⃣ Prioritise skills & education: People will need support to seize the benefits of AI in their work and lives. A revitalised European Skills Agenda should put skills and education at the centre, while AI should be added to school curricula.

Google wants to help Europe seize the benefits of innovation. Over the last decade, we've worked hand in hand with governments to build new technology responsibly, train over 13 million Europeans in digital skills, and support over €179 billion in economic activity across the EU. As a European, I'm proud of this work, but I know there's much more to do.

Read Draghi's report here: https://lnkd.in/epBxtymw
-
The Pentagon Just Handed American Drone Startups a $1 Billion Golden Ticket

On July 10, SECDEF dropped a memo that changes everything for drone manufacturers. Combined with Trump's June 6 executive order, we're witnessing the most radical shift in defense procurement since World War II.

Here's what just happened: the Pentagon ripped up years of red tape that kept innovative companies out of defense contracts. Now they're treating small drones (under 55 pounds) like ammunition: expendable, mass-produced, and urgently needed.

The numbers are staggering:
• Every Army squad gets attack drones by FY2026
• Production target: millions of units annually
• Weaponization approvals: cut from years to 30 days
• Battery certifications: down to one week

For companies eyeing this opportunity, here's your roadmap:

Step 1: Compliance First (Immediate)
Ensure NDAA compliance: zero Chinese components. Review the Blue UAS Framework. This isn't negotiable; one foreign chip kills your entire opportunity.

Step 2: Prototype Fast (12-18 months)
Build modular systems under 55 pounds. Think swappable payloads for ISR or strike missions. The 18 prototypes showcased on July 17 averaged 18 months of development vs. the traditional 6 years.

Step 3: Get Certified (Ongoing)
Apply to DIU's Blue UAS program. This is your fastest path to approved vendor status. The memo expands this list, with AI-managed updates coming in 2026.

Step 4: Find Your Entry Point (30-90 days)
• Respond to the Army's July 8 solicitation for low-cost systems
• Partner with established primes as a subcontractor
• Target frontline units, which are now empowered to buy directly

Step 5: Scale Smart (By 2026)
Secure private funding. Explore DoD purchase commitments. Participate in the new drone test zones launching in 90 days.

The brutal reality? We're playing catch-up. China produces 90% of commercial drones globally. But that's precisely why this opportunity exists: the Pentagon desperately needs American manufacturers.
Watch for these challenges:
• Supply chain constraints for non-Chinese components
• Fierce competition from AeroVironment and Kratos
• Higher production costs vs. Chinese competitors
• Maintaining cybersecurity while moving fast

Stock prices tell the story: drone companies surged 15-40% after the announcement. Private capital is flooding in. America is building a new arsenal, and drones are the foundation. If you have manufacturing capability, AI expertise, or can build at scale, this is your Manhattan Project moment. The difference? This time, we know exactly what we're building and why.

The window is open. But it won't stay that way.
-
Agentic AI Design Patterns are emerging as the backbone of real-world, production-grade AI systems, and this is gold from Andrew Ng.

Most current LLM applications are linear: prompt → output. But real-world autonomy demands more. It requires agents that can reflect, adapt, plan, and collaborate over extended tasks and in dynamic environments. That's where the RTPM framework comes in. It's a design blueprint for building scalable agentic systems:
➡️ Reflection
➡️ Tool-Use
➡️ Planning
➡️ Multi-Agent Collaboration

Let's unpack each one from a systems engineering perspective:

🔁 1. Reflection
This is the agent's ability to perform self-evaluation after each action. It's not just post-hoc logging; it's part of the control loop. Agents ask:
→ Was the subtask successful?
→ Did the tool/API return the expected structure or value?
→ Is the plan still valid given the current memory state?

Techniques include:
→ Internal scoring functions
→ Critic models trained on trajectory outcomes
→ Reasoning chains that validate step outputs

Without reflection, agents remain brittle; with it, they become self-correcting systems.

🛠 2. Tool-Use
LLMs alone can't interface with the world. Tool-use enables agents to execute code, perform retrieval, query databases, call APIs, and trigger external workflows. Tool-use design involves:
→ Function calling or JSON schema execution (OpenAI, Fireworks AI, LangChain, etc.)
→ Grounding outputs into structured results (e.g., SQL, Python, REST)
→ Chaining results into subsequent reasoning steps

This is how you move from "text generators" to capability-driven agents.

📊 3. Planning
Planning is the core of long-horizon task execution.
Agents must:
→ Decompose high-level goals into atomic steps
→ Sequence tasks based on constraints and dependencies
→ Update plans reactively when intermediate states deviate

Design patterns here include:
→ Chain-of-thought with memory rehydration
→ Execution DAGs or LangGraph flows
→ Priority queues and re-entrant agents

Planning separates short-term LLM chains from persistent agentic workflows.

🤖 4. Multi-Agent Collaboration
As task complexity grows, specialization becomes essential. Multi-agent systems allow modularity, separation of concerns, and distributed execution. This involves:
→ Specialized agents: planner, retriever, executor, validator
→ Communication protocols: Model Context Protocol (MCP), A2A messaging
→ Shared context: via centralized memory, vector DBs, or message buses

This mirrors multi-threaded systems in software, except now the "threads" are intelligent and autonomous.

Agentic design ≠ monolithic LLM chains. It's about constructing layered systems with runtime feedback, external execution, memory-aware planning, and collaborative autonomy.

Here is a deep-dive blog if you would like to learn more: https://lnkd.in/dKhi_n7M
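The Reflection pattern described above, a critic inside the control loop rather than post-hoc logging, can be sketched in a few lines. This is a toy illustration under stated assumptions: `draft_answer` and `critique` stand in for LLM generation and critic-model calls, and the numeric quality score exists only to make the retry loop observable.

```python
# Hedged sketch of the Reflection pattern: after each action the agent
# scores its own output and retries when the self-check fails.

def draft_answer(task, attempt):
    # Stand-in for a generation step. Quality rises with each attempt
    # here purely so the example converges deterministically.
    return {"text": f"answer v{attempt} for {task}", "quality": 0.4 + 0.3 * attempt}

def critique(result):
    # Stand-in critic: in a real system a second model or scoring
    # function would validate structure, grounding, or task success.
    return result["quality"] >= 0.9

def reflect_loop(task, max_attempts=5):
    # The reflection gate sits inside the loop, so each output is
    # evaluated before the agent commits to it.
    for attempt in range(max_attempts):
        result = draft_answer(task, attempt)
        if critique(result):
            return result, attempt
    return result, max_attempts - 1  # best effort after exhausting retries

result, attempts = reflect_loop("summarize report")
```

The same skeleton generalizes to the other patterns: swap `draft_answer` for a tool call (Tool-Use), generate a list of subtasks instead of a single answer (Planning), or let `critique` be a separate validator agent (Multi-Agent Collaboration).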