Understanding AI Agents and Their Types

The document provides an overview of agents in Artificial Intelligence, defining an agent as an entity that perceives its environment and acts upon it using sensors and actuators. It categorizes agents into various types, including human, robotic, and software agents, and outlines the structure of an AI agent using the PEAS representation. Additionally, it discusses different types of AI agents based on their intelligence and capabilities, such as simple reflex agents, goal-based agents, and learning agents, while also describing the characteristics of the environments in which these agents operate.


Agents in Artificial Intelligence

What is an Agent?

An agent is anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a continuous cycle of perceiving, thinking, and acting. An agent can be:

• Human agent: A human agent has eyes, ears, and other organs that serve as sensors, and hands, legs, and the vocal tract that serve as actuators.

• Robotic agent: A robotic agent can have cameras and infrared range finders as sensors, and various motors as actuators.

• Software agent: A software agent can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
• Sensor: A sensor is a device that detects changes in the environment and sends the information to other electronic devices.
• Actuators: Actuators are the components of a machine that convert energy into motion; they are responsible for moving and controlling a system.
• Effectors: Effectors are the devices that affect the environment.
Intelligent Agents:
An intelligent agent is an autonomous entity that acts upon an environment using sensors and actuators to achieve its goals.
Rules for an AI agent:
• Rule 1: An AI agent must have the ability to perceive the environment.
• Rule 2: The observation must be used to make decisions.
• Rule 3: The decision should result in an action.
• Rule 4: The action taken by an AI agent must be a rational action.
Rational Agent:

• A rational agent is one that does the right thing. AI is largely about creating rational agents, which are applied in game theory and decision theory to various real-world scenarios.
• Rational agents in AI are very similar to intelligent agents.
Structure of an AI Agent

• The structure of an intelligent agent is a combination of an architecture and an agent program. It can be viewed as:
Agent = Architecture + Agent program

• The following are the three main terms involved in the structure of an AI agent:
Architecture: the machinery on which the AI agent executes.
Agent program: an implementation of the agent function.
Agent function: a mapping from a percept sequence to an action:

f : P* → A
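
As a concrete illustration, the agent function can be sketched in a few lines of Python. This is a minimal, hypothetical example (the percept names and actions are made up for a vacuum-cleaner-style world), not a prescribed implementation:

# Minimal sketch of an agent function f : P* -> A.
# P* is the sequence of all percepts seen so far; the agent program
# stores that history and calls the function once per cycle.

def agent_function(percept_history):
    """Map a percept sequence to an action (hypothetical rules)."""
    location, status = percept_history[-1]   # only the latest percept is used here
    if status == "dirty":
        return "suck"
    return "move_right" if location == "A" else "move_left"

# Agent program: perceive, think, act in a loop.
history = []
for percept in [("A", "dirty"), ("A", "clean"), ("B", "dirty")]:
    history.append(percept)
    print(percept, "->", agent_function(history))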
PEAS Representation

PEAS is a model used to describe the task environment that an AI agent works in.
• P: Performance measure
• E: Environment
• A: Actuators
• S: Sensors
Example: PEAS for a self-driving car

For a self-driving car, the PEAS representation is:

• Performance: Safety, time, legal driving, comfort
• Environment: Roads, other vehicles, road signs
• Actuators: Steering, accelerator, brake, signal, horn
• Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar
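
Since a PEAS description is just structured data, it can also be recorded directly in code. The Python sketch below is only one possible representation; the class name and field names are assumptions made for illustration:

from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    """PEAS description of a task environment."""
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

self_driving_car = PEAS(
    performance=["safety", "time", "legal driving", "comfort"],
    environment=["roads", "other vehicles", "road signs"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "accelerometer", "sonar"],
)
print(self_driving_car)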
Types of AI Agents

An intelligent agent is a program that can make decisions or perform a service based on its environment, user input, and experience.
Agents can be grouped into five classes based on their degree of perceived intelligence and capability:

• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
• Learning agents
Simple Reflex Agents
• They choose actions only on the basis of the current percept.
• They are rational only if a correct decision can be made from the current percept alone.
• They work only in fully observable environments.
• Condition-action rule − a rule that maps a state (condition) to an action.
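
The sketch below illustrates the idea in Python for a hypothetical vacuum-cleaner world; the rules and percepts are invented for the example, and only the current percept is consulted:

# Condition-action rules: each percept (condition) maps directly to an action.
RULES = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "clean"): "move_left",
}

def simple_reflex_agent(percept):
    """Choose an action from the current percept only, with no memory."""
    return RULES.get(percept, "no_op")

for percept in [("A", "dirty"), ("A", "clean"), ("B", "clean")]:
    print(percept, "->", simple_reflex_agent(percept))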
Model Based Reflex Agents
They use a model of the world to choose their actions and maintain an internal state.
These agents have a model, which is knowledge of the world, and they act based on that model.
Updating the state requires information about:
• How the world evolves.
• How the agent's actions affect the world.
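
A minimal Python sketch of this idea is shown below. The world model here is just a dictionary updated from each percept; in a real agent it would also encode how the world evolves and how the agent's actions affect it:

class ModelBasedReflexAgent:
    """Keeps an internal state (a simple world model) across percepts."""

    def __init__(self):
        self.state = {}        # internal model of the world
        self.last_action = None

    def update_state(self, percept):
        # Fold the latest percept into the model; a fuller model would also
        # use self.last_action to predict unobserved parts of the world.
        self.state.update(percept)

    def act(self, percept):
        self.update_state(percept)
        if self.state.get("status") == "dirty":
            action = "suck"
        elif self.state.get("location") == "A":
            action = "move_right"
        else:
            action = "move_left"
        self.last_action = action
        return action

agent = ModelBasedReflexAgent()
print(agent.act({"location": "A", "status": "dirty"}))   # suck
print(agent.act({"location": "A", "status": "clean"}))   # move_right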
Goal Based Agents

A goal-based agent has a goal and a strategy to reach that goal. All actions are taken to reach this goal. More precisely, from the set of possible actions, it selects the one that improves progress towards the goal (not necessarily the best one).
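
The Python sketch below shows that selection step for a toy goal of reaching position 10 on a line; the prediction function and action names are assumptions made for illustration:

GOAL = 10   # hypothetical goal: reach position 10 on a number line

def predict(position, action):
    """Predicted result of taking an action (assumed, simplified dynamics)."""
    return position + 1 if action == "forward" else position - 1

def goal_based_agent(position, actions=("forward", "backward")):
    # Pick the action whose predicted outcome is closest to the goal.
    return min(actions, key=lambda a: abs(GOAL - predict(position, a)))

position = 7
while position != GOAL:
    action = goal_based_agent(position)
    position = predict(position, action)
    print(action, "->", position)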
Utility Based Agents
• These agents are similar to goal-based agents but add an extra component, a utility measure, which provides a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve them.
• Utility-based agents are useful when there are multiple possible alternatives and the agent has to choose the best action among them.
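
The difference from a goal-based agent can be sketched in Python as below: candidate actions are scored by a utility function, and the highest-scoring one is chosen. The utility function, dynamics, and action names are invented for the example:

def utility(state):
    # Hypothetical utility: prefer being close to position 10 and using little fuel.
    return -abs(10 - state["position"]) - state["fuel_used"]

def predict(state, action):
    """Assumed effect of each action on position and fuel consumption."""
    step, fuel = (2, 3) if action == "fast" else (1, 1)
    return {"position": state["position"] + step,
            "fuel_used": state["fuel_used"] + fuel}

def utility_based_agent(state, actions=("fast", "slow")):
    # Both actions make progress towards the goal; pick the higher-utility one.
    return max(actions, key=lambda a: utility(predict(state, a)))

print(utility_based_agent({"position": 7, "fuel_used": 0}))   # "slow" scores higher here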
Learning Agents
A learning agent is an agent capable of learning from experience. It can automatically acquire new information and integrate it into the system. Any agent designed and expected to succeed in an uncertain environment is considered a learning agent.
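
A very small Python sketch of learning from experience is given below (a bandit-style running-average update; the actions and reward numbers are made up and are not part of the original material):

import random

estimates = {"A": 0.0, "B": 0.0}   # learned estimate of each action's value
counts = {"A": 0, "B": 0}

def reward(action):
    # Unknown to the agent: action "B" is better on average.
    return random.gauss(1.0 if action == "B" else 0.3, 0.1)

random.seed(0)
for _ in range(200):
    # Explore occasionally, otherwise exploit the current best estimate.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    r = reward(action)
    counts[action] += 1
    estimates[action] += (r - estimates[action]) / counts[action]   # running mean

print(estimates)   # the estimate for "B" should end up clearly higher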
Agent Environment in AI
• An environment is the situation in which an agent is present; it is not part of the agent itself.
• The environment is where the agent lives and operates, and it provides the agent with something to sense and act upon.
• Features of Environment
1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs sequential
7. Accessible vs Inaccessible
1. Fully observable / Partially observable − If the complete state of the environment can be determined from the percepts at each point in time, the environment is fully observable; otherwise it is only partially observable.
Examples:
– Chess – the board is fully observable, and so are the opponent's moves.
– Driving – the environment is partially observable because what is around the corner is not known.

2. Static / Dynamic − If the environment does not change while an agent is acting, it is static; otherwise it is dynamic.

3. Discrete / Continuous − If there is a limited number of distinct, clearly defined states of the environment, the environment is discrete (for example, chess); otherwise it is continuous (for example, driving).
Examples:
– The game of chess is discrete, as it has only a finite number of moves.
– Driving is continuous, as quantities such as position, speed, and steering angle vary over a continuous range and cannot be enumerated.
4. Deterministic / Non-deterministic − If the agent's current state and chosen action completely determine the next state of the environment, the environment is deterministic; otherwise it is non-deterministic.
Examples:
– Chess – there are only a limited number of possible moves for a piece in the current state, and the result of each move is fully determined.
– Self-driving cars – the outcome of an action is not unique; it varies from time to time with traffic and road conditions.
5. Single agent / Multiple agents − An environment consisting of only one
agent is said to be a single-agent environment. An environment involving
more than one agent is a multi-agent environment.
6. Episodic / Non-episodic − In an episodic environment, the agent performs a series of one-shot actions, and only the current percept is required to choose an action. In a sequential environment, the agent needs a memory of past actions to determine the next best action.
7. Accessible / Inaccessible − If an agent can obtain complete and accurate information about the environment's state, the environment is accessible; otherwise it is inaccessible.
