
It's hard to deny that we now live in a time when AI permeates everyday life, from customer service bots to autonomous assistants. However, poorly designed AI solutions can lead to misplaced trust, misinformation, and ethical lapses, as several high-profile failures have shown.
- Air Canada's chatbot once misled a grieving passenger with inaccurate refund advice, resulting in a tribunal ruling that held the airline accountable for the AI's errors and underscoring the legal risks of unchecked automation.
- Microsoft's Bing AI, dubbed Sydney, veered into threatening and manipulative behavior during conversations, attempting to undermine users' personal relationships and highlighting the psychological dangers of unmoderated AI personas.
- Zillow shut down its Zillow Offers home-flipping business after the AI algorithm unintentionally purchased homes at prices higher than its own estimates of their future selling prices, resulting in a $304 million inventory write-down and a workforce reduction of 2,000 employees.
These failures make it clear that simply deploying AI is not enough. Achieving positive outcomes and avoiding costly errors requires the deliberate design and implementation of human-AI collaboration, with human planning, interaction, and oversight built in.
AI agents are also a hot topic: semi-autonomous systems that can perceive what's happening around them, make choices on their own, and work toward goals without needing constant human help. This means the ways people team up with AI are changing and growing more varied. We need a clear way to organize these teamwork styles, based on factors like how humans and AI connect and the different ways AI helps with decisions.
With factors like those in mind, we can place people in the right roles, helping to keep things fair, avoid problems like AI interactions degrading over time, and create a human-AI partnership where both sides help each other. Additionally, by sorting these interactions based on when and how much humans get involved in the AI "loop," we can build better systems that make more effective use of AI's strengths, like speed and the ability to handle lots of data, while adding human strengths such as good judgment, imagination, and a sense of right and wrong.
This article looks at how and why human-AI interactions can be positioned in relation to the AI loop and proposes a framework to classify those interactions. Having a framework adds structure to discussions about human-AI interactions and how they apply to AI use cases and associated solutions.

Humans and the AI “loop”
So, what exactly is this AI "loop" being discussed? Think of it as the basic cycle that AI goes through to do its job, much like how a person might handle a task. First, it "perceives" or notices what's going on, like spotting a problem or getting new information. Then, it "decides" what to do next based on that info. After that, it takes "action," actually doing something to move toward its goal. Finally, it gets "feedback" on how things went, learning from successes or mistakes to get better next time. This loop repeats over and over, letting AI work on its own, but humans can join in at different points depending on the situation—like jumping in to help during a tough decision or just checking in afterward.
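The cycle described above can be sketched in a few lines of code. This is a toy illustration only; the function, thresholds, and update rule are all made up for this article, not taken from any specific framework.

```python
# A minimal sketch of the perceive-decide-act-feedback loop described above.
# All names and numbers here are illustrative.

def run_agent_loop(observations, max_iterations=3):
    """Run a toy AI loop: perceive, decide, act, then learn from feedback."""
    knowledge = {"threshold": 0.5}  # what the agent has "learned" so far
    log = []
    for i in range(max_iterations):
        # 1. Perceive: take in new information
        signal = observations[i % len(observations)]
        # 2. Decide: choose an action based on current knowledge
        action = "act" if signal > knowledge["threshold"] else "wait"
        # 3. Act: carry out the decision (here, just record it)
        log.append((signal, action))
        # 4. Feedback: adjust behavior for the next cycle
        knowledge["threshold"] = 0.9 * knowledge["threshold"] + 0.1 * signal
    return log, knowledge

log, knowledge = run_agent_loop([0.2, 0.8, 0.6])
```

Every pattern in the framework below answers one question about this loop: where, when, and how strongly does a human plug into it?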
A classification framework for human-AI interaction patterns
This article introduces a comprehensive framework designed to structure discussions and designs of human-AI interaction. The framework comprises four broad categories, which organize a total of 10 distinct classifications. While the categories provide an overarching structure, each of the 10 classifications offers a nuanced perspective on specific modes of human-AI collaboration. This systematic approach aims to cultivate more effective, efficient, and intuitive partnerships between humans and AI.

Temporal positioning patterns group
Temporal positioning patterns focus on when humans engage relative to the AI loop's cycle—before it starts, after it ends, or encompassing the entire process. This grouping exists to address scenarios where timing is critical, such as preparing AI for success upfront or reviewing outcomes to improve future loops, so systems evolve without constant real-time interference. It differs from other groups by being sequence-oriented rather than control- or role-focused, making it ideal for iterative processes like machine learning (ML) development.
Human-Before-the-Loop (HB4L: Pre-loop involvement)
Humans provide foundational inputs upfront, such as designing models, labeling initial data, or setting parameters before the AI loop begins, without ongoing participation.
- Differences: Focuses on preparation rather than runtime interaction, contrasting with in-loop or on-loop models that involve humans during operations.
- Examples: In ML training, humans curate datasets or define ethical guidelines pre-deployment; in agentic systems, developers configure goal hierarchies for autonomous robots before activation.
- Application to agentic AI: Provides robust starting points for headless agents, as in large language model (LLM) fine-tuning where humans select prompts beforehand. This pattern helps prevent early biases but risks outdated setups without later oversight.
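A minimal sketch of the HB4L pattern might look like the following, where a person curates labels and sets boundaries once, up front, and the loop then runs with no further human input. The configuration fields and routing rule are hypothetical.

```python
# Illustrative Human-Before-the-Loop sketch: the human's only contribution
# happens before the loop starts. All names and rules are hypothetical.

def human_prepares_config():
    """Pre-loop step: a human curates labeled examples and sets boundaries."""
    return {
        "allowed_actions": {"summarize", "classify"},  # human-set boundary
        "labeled_examples": [("refund request", "billing"), ("login error", "support")],
    }

def autonomous_loop(config, requests):
    """The loop runs unattended, constrained only by the upfront config."""
    routed = []
    for text in requests:
        # Route by matching against the human-curated examples
        label = next((lab for ex, lab in config["labeled_examples"] if ex in text), "unknown")
        routed.append((text, label))
    return routed

config = human_prepares_config()
routed = autonomous_loop(config, ["refund request for flight", "login error on portal"])
```

Note that if the world drifts away from the curated examples, everything falls to "unknown," which is exactly the outdated-setup risk mentioned above.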
Human-Behind-the-Loop (HBTL: Post-process review)
Humans analyze AI outputs after the loop completes, focusing on audits, refinements, and learning for future iterations.
- Differences: Retrospective rather than proactive (vs. before/above), allowing full AI autonomy during execution.
- Examples: Auditing automated financial trades or reviewing AI-generated reports in qualitative research.
- Application to agentic AI: Prevents complacency by validating outcomes, as in post-event analysis for agentic cybersecurity agents.
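As a rough sketch of HBTL, the AI below completes its run with full autonomy, and only afterward does a human audit flag low-confidence results for the next iteration. The confidence scores and the audit threshold are invented for illustration.

```python
# Hedged Human-Behind-the-Loop sketch: AI runs first, human audits after.
# Scores and thresholds are made up for illustration.

def ai_run(transactions):
    """AI executes with full autonomy; each trade gets a confidence score."""
    return [{"trade": t, "confidence": round(1.0 - 0.1 * i, 1)}
            for i, t in enumerate(transactions)]

def human_audit(outputs, min_confidence=0.85):
    """Post-loop review: flag low-confidence results for future refinement."""
    return [o for o in outputs if o["confidence"] < min_confidence]

outputs = ai_run(["buy AAA", "sell BBB", "buy CCC"])
flagged = human_audit(outputs)
```

The key property is that `human_audit` never blocks `ai_run`; the human's influence lands on the next loop, not this one.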
Human-Around-the-Loop (HArTL: Holistic encirclement)
Humans surround the AI loop with multi-stage involvement, combining pre-, in-, on-, and post-loop elements for comprehensive support, often in iterative or feedback-driven systems.
- Differences: Encompasses other patterns, providing end-to-end human influence unlike focused models, akin to symbiotic networks in hyperconnected AI.
- Examples: Full-cycle ML pipelines where humans design (before), intervene (in/on), and audit (behind); in embodied AI, surrounding feedback loops alter perceptual judgments.
- Application to agentic AI: Ideal for complex ecosystems, like robotic surgery where humans encircle the process from planning to post-operation review, fostering adaptive symbiosis.

Direct engagement patterns group
Direct engagement patterns emphasize active human participation during the AI operational loop, where people are hands-on in guiding or intervening in real-time. This category exists for high-stakes or complex tasks that demand immediate human input to prevent errors or add nuance, promoting close collaboration in dynamic environments. It stands out from temporal or strategic groups by prioritizing ongoing interaction over preparation/review or high-level planning, and from minimal patterns by avoiding full delegation.
Human-in-the-Loop (HITL: Direct integration)
Humans are embedded directly in the AI decision-making cycle, providing real-time inputs, validations, or corrections at key steps to refine outputs.
- Differences: Requires continuous collaboration, unlike HOTL's monitoring or HBTL's post-review; it's more hands-on than AITL, where AI assists humans.
- Examples: Data labeling in model training or escalating complex queries in customer support chatbots; in healthcare, AI suggests diagnoses, but clinicians approve based on context.
- Application to agentic AI: Vital for high-stakes tasks, like conversation handoffs or real-time human feedback systems, providing accuracy but potentially creating bottlenecks in scalable systems.
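A minimal HITL sketch: the AI proposes, and a human approver is consulted inside the loop whenever confidence is low. Here the human is simulated by a callback; in a real system it would be a UI prompt or a queue. All names and the confidence cutoff are illustrative.

```python
# Illustrative Human-in-the-Loop sketch: escalation happens inside the loop,
# so the human's verdict arrives before the next item is processed.

def hitl_pipeline(items, human_approve):
    """Each item either passes automatically or is escalated to a human."""
    results = []
    for text, confidence in items:
        if confidence >= 0.9:
            results.append((text, "auto-approved"))
        else:
            # In-loop escalation: the human decides before the loop continues
            verdict = "approved" if human_approve(text) else "rejected"
            results.append((text, f"human-{verdict}"))
    return results

# Simulated human reviewer: approves anything mentioning a refund
results = hitl_pipeline(
    [("issue refund", 0.6), ("close account", 0.95), ("delete records", 0.5)],
    human_approve=lambda text: "refund" in text,
)
```

The bottleneck risk is visible in the structure: every low-confidence item blocks on `human_approve`, which does not scale with request volume.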
Human-on-the-Loop (HOTL: Supervisory oversight)
AI operates autonomously in its loop, but humans monitor progress via alerts or dashboards, intervening asynchronously for exceptions, refinements, or ethical adjustments.
- Differences: Balances AI autonomy (more than HITL) with oversight (less than HIC), focusing on efficiency without constant input.
- Examples: Remote patient monitoring where AI flags anomalies for review; in cybersecurity, headless agents detect threats, with humans auditing escalations.
- Application to agentic AI: Core to the 2025 agentic evolution, as seen in LangGraph's interrupt mechanisms, and enables symbiosis in workflows like vibe-coding agents.
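Contrast this with the in-loop pattern: in the HOTL sketch below, the AI acts on every event without waiting, and exceptions are only queued to a dashboard for asynchronous human review. Event shapes and the severity cutoff are invented for illustration.

```python
# Sketch of Human-on-the-Loop: the AI never blocks on a human, but
# high-severity events are queued for later review. Names are illustrative.

def hotl_monitor(events, alert_threshold=7):
    """AI handles all events; severe ones are also queued for a human."""
    handled, review_queue = [], []
    for name, severity in events:
        handled.append(name)           # AI acts autonomously on everything
        if severity >= alert_threshold:
            review_queue.append(name)  # human reviews later, asynchronously
    return handled, review_queue

handled, review_queue = hotl_monitor(
    [("port scan", 3), ("privilege escalation", 9), ("failed login", 5)]
)
```

Unlike the HITL case, nothing here waits on a human verdict; the oversight happens off the critical path.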
Human-in-Command (HIC: Strict authority)
Humans maintain absolute control, with AI as a subordinate tool under direct supervision.
- Differences: Prioritizes human dominance over collaboration (vs. HITL) or monitoring (vs. HOTL).
- Examples: Robotic surgery where surgeons command AI tools; aviation autopilots under pilot authority.
- Application to agentic AI: Enables accountability in critical domains like healthcare and military applications.
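In code, HIC inverts the default: the AI only suggests, and nothing executes unless the human explicitly commands it. This sketch is illustrative; the step names are hypothetical.

```python
# Minimal Human-in-Command sketch: the AI is a subordinate tool.
# No explicit human command means no action. Names are illustrative.

def hic_session(suggestions, human_commands):
    """AI proposes steps; only human-commanded steps are executed."""
    executed = []
    for step in suggestions:
        if step in human_commands:  # absent a command, the AI does nothing
            executed.append(step)
    return executed

executed = hic_session(
    suggestions=["align instrument", "make incision", "close incision"],
    human_commands={"align instrument", "close incision"},
)
```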

Strategic and oversight patterns group
Strategic and oversight patterns involve high-level human guidance or control without constant immersion in the AI loop, allowing humans to set directions or retain veto power from afar. This grouping is designed for scalable systems where AI handles day-to-day execution but humans maintain ethical or directional authority, balancing efficiency with accountability. It differs from direct engagement by being less tactical and more visionary, from temporal patterns by not being tied to specific timing, and from minimal groups by retaining some human influence.
Human-Above-the-Loop (HATL: Strategic governance)
Humans set high-level policies, goals, and boundaries without operational involvement, governing the AI system from a macro perspective.
- Differences: Emphasizes direction over monitoring (vs. HOTL) or review (vs. HBTL), similar to HIC but less tactical.
- Examples: Executives defining ethical AI guidelines in enterprises; in scientific discovery, researchers outline objectives for AI agent teams in Stanford's Virtual Lab.
- Application to agentic AI: Provides long-term alignment, as in multi-agent systems where humans define collaboration rules, preventing ethical drift in scalable deployments.
Human-Over-the-Loop (HOvL: Oversight with veto)
Similar to HATL but with explicit veto power over AI actions, providing ultimate control.
- Differences: More interventionist than above but less than in-loop.
- Examples: Executive oversight in AI marketing campaigns, overriding decisions as needed.
- Application to agentic AI: Builds trust in high-stakes multi-agent setups.
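The veto dynamic is the mirror image of HIC: here the AI proceeds by default, and the human blocks specific actions rather than authorizing each one. A hedged sketch with invented action names:

```python
# Sketch of Human-Over-the-Loop: actions run by default; the human holds
# an explicit veto over specific ones. Names are illustrative.

def hovl_execute(planned_actions, human_vetoes):
    """Every planned action runs unless the human has vetoed it."""
    return [a for a in planned_actions if a not in human_vetoes]

executed = hovl_execute(
    planned_actions=["send campaign email", "raise ad budget", "post on social"],
    human_vetoes={"raise ad budget"},
)
```

Compare the defaults: in HIC, silence from the human means no action; in HOvL, silence means the AI proceeds.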

Minimal or reversed involvement patterns group
Minimal or reversed involvement patterns cover low human interaction roles—either complete AI autonomy or flipped dynamics where humans lead and AI assists. This category exists to enable full automation in safe, routine tasks or to augment human decision-making in human-centric workflows, optimizing for speed and minimal workload. It contrasts with the others by minimizing or inverting human involvement, focusing on delegation or support rather than active or strategic participation.
Human-Out-of-the-Loop (HOOTL: Full AI autonomy)
No human involvement at any stage; AI handles the entire loop independently in low-risk scenarios.
- Differences: Complete delegation, contrasting with all other patterns; risks unaddressed errors without safeguards.
- Examples: Basic automated sorting or routine data processing in controlled environments.
- Application to agentic AI: Limited to predictable tasks, with current trends cautioning against it in dynamic settings due to ethical concerns.
AI-in-the-Loop (AITL: Human-centric augmentation)
Flips the dynamic—humans lead, with AI providing assistive inputs like suggestions within human workflows.
- Differences: Human-led vs. AI-led in other models; promotes augmentation over automation.
- Examples: Interactive diagnostics where doctors query AI for insights; in creative tools, AI generates drafts for human refinement.
- Application to agentic AI: Enhances human capabilities in hybrid systems, as in productivity testing.
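The flipped dynamic can be sketched as follows: the workflow is human-led, and the AI is only consulted for a suggestion the human is free to accept or override. The "AI" here is a stub returning canned suggestions; all names are hypothetical.

```python
# AI-in-the-Loop sketch: the human drives the loop; the AI assists inside it.
# The suggestion table and workflow names are made up for illustration.

def ai_suggest(symptom):
    """Stub assistant: returns a canned suggestion for the human to weigh."""
    hints = {"fever": "check for infection", "fatigue": "review sleep and iron levels"}
    return hints.get(symptom, "no suggestion")

def human_workflow(symptoms, accept):
    """Human-led loop: query the AI, then decide whether to use its advice."""
    notes = []
    for s in symptoms:
        suggestion = ai_suggest(s)  # AI assists inside the human's flow
        notes.append(suggestion if accept(s) else "clinician's own judgment")
    return notes

notes = human_workflow(["fever", "fatigue"], accept=lambda s: s == "fever")
```

Structurally this is HITL turned inside out: the loop belongs to the human, and the AI is the one called at key steps.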
Final thoughts
As AI systems become more sophisticated and autonomous, understanding the nuances of human-AI interaction is crucial. This framework, categorizing 10 distinct patterns across temporal positioning, direct engagement, strategic oversight, and minimal or reversed involvement, provides a structured lens through which to analyze and design these collaborations. By thoughtfully applying these classifications, we can move beyond generic notions of "human-in-the-loop" to create more effective and adaptable AI systems. Ultimately, a clear understanding of these interaction patterns will enable us to harness the power of AI while preserving and enhancing human agency, judgment, and creativity in an increasingly intelligent world.
Get started with AI agents
- Official Red Hat Overview: For a comprehensive understanding of agentic AI, explore this article.
- Building Enterprise-Ready AI Agents: Learn how to streamline development with Red Hat AI in this Red Hat blog article.
- Agentic AI Examples with Red Hat AI: Explore various Agentic AI frameworks and LLMs running on Red Hat AI platforms.
- Red Hat AI Agentic Demo: Experience a full agentic AI workflow with real-time interactions across multiple systems including CRM, PDF generation, and Slack integrations.
About the author
With over thirty years in the software industry at companies like Sybase, Siebel Systems, Oracle, IBM, and Red Hat (since 2012), I am currently an AI Technical Architect and AI Futurist. Previously at Red Hat, I led a team that enhanced worldwide sales through strategic sales plays and tactics for the entire portfolio, and prior to that, managed technical competitive marketing for the Application Services (middleware) business unit.
Today, my mission is to demystify AI architecture, helping professionals and organizations understand how AI can deliver business value, drive innovation, and be effectively integrated into software solutions. I leverage my extensive experience to educate and guide on the strategic implementation of AI. My work focuses on explaining the components of AI architecture, their practical application, and how they can translate into tangible business benefits, such as gaining competitive advantage, differentiation, and delighting customers with simple yet innovative solutions.
I am passionate about empowering businesses to not only harness AI to anticipate future technological landscapes but also to shape them. I also strive to promote the responsible use of AI, enabling everyone to achieve more than they could without it.