COA Unit-1 Notes
UNIT-1
Faculty Name:- Mr. Md Ishfaque Ahmed
Computer: A computer can be defined as a fast electronic calculating machine that accepts digitized input
information (data), processes it as per a list of internally stored instructions, and produces the resulting
information.
The list of instructions is called a program, and the internal storage is called computer memory.
1. Personal computers: - This is the most common type, found in homes, schools, business offices, etc. The
typical form is the desktop computer, which combines processing and storage units with various input and
output devices.
2. Workstations: - These have high-resolution input/output (I/O) graphics capability, but with roughly the same
dimensions as a desktop computer. They are used in engineering applications such as interactive design
work.
3. Enterprise systems: - These are used for business data processing in medium to large corporations that
require much more computing power and storage capacity than workstations. Internet-associated servers
have become a dominant worldwide source of all types of information.
4. Supercomputers: - These are used for the large-scale numerical calculations required in applications such as
weather forecasting.
Functional unit: -
A computer consists of five functionally independent main parts: input, memory, arithmetic logic unit (ALU),
output, and control units.
Input unit: - The source program / high-level language program / coded information / simply data is fed to a
computer through input devices; the keyboard is the most common type. Whenever a key is pressed, the
corresponding letter or digit is translated into its equivalent binary code and sent over a cable to either the
memory or the processor. Joysticks, trackballs, mice, scanners, etc. are other input devices.
Output unit:- This is the counterpart of the input unit. Its basic function is to send the processed
results to the outside world. Examples: printer, speakers, monitor, etc.
Memory unit: - Its function is to store programs and data. It is basically of two types:
1. Primary memory
2. Secondary memory
1. Primary memory: - This is the memory exclusively associated with the processor, and it operates at
electronic speeds. Programs must be stored in this memory while they are being executed. The memory
contains a large number of semiconductor storage cells, each capable of storing one bit of information. These
cells are processed in groups of fixed size called words. To provide easy access to a word in memory, a distinct
address is associated with each word location; addresses are numbers that identify memory locations (a small
sketch after the list below illustrates this). The number of bits in each word is called the word length of the
computer. Programs must reside in the memory during execution. Instructions and data can be written into
or read out of the memory under the control of the processor.
● Memory in which any location can be reached in a short and fixed amount of time after specifying its
address is called random-access memory (RAM).
● The time required to access one word is called the memory access time. Memory which is only readable
by the user, and whose contents cannot be altered, is called read-only memory (ROM); it typically contains
fixed programs such as the bootstrap loader used to start the operating system.
● Caches are small, fast RAM units that are coupled with the processor and are often contained on
the same IC chip to achieve high performance. Although primary storage is essential, it tends to be
expensive.
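To make the idea of addresses and words concrete, here is a minimal C sketch (an illustration only; the sizes and names are assumptions, not taken from these notes). Memory is modeled as an array of 16-bit words, and the array index plays the role of the address:

#include <stdint.h>
#include <stdio.h>

#define MEM_WORDS 4096                 /* assumed memory size: 4096 words */

static uint16_t memory[MEM_WORDS];     /* each element stores one 16-bit word */

/* Read the word stored at the given address (array index = address). */
uint16_t mem_read(uint16_t address) { return memory[address]; }

/* Write a word into the given address, under processor control. */
void mem_write(uint16_t address, uint16_t word) { memory[address] = word; }

int main(void) {
    mem_write(100, 0x1234);            /* store a word at address 100 */
    printf("M[100] = 0x%04X\n", (unsigned)mem_read(100));
    return 0;
}

Here the word length is 16 bits, so each address selects one 16-bit word, matching the definition above.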
2. Secondary memory: - This is used where large amounts of data and programs have to be stored, particularly
information that is accessed infrequently. Examples: magnetic disks and tapes, optical disks (i.e., CD-ROMs),
floppies, etc.
Arithmetic logic unit (ALU):- Most computer operations, such as addition, subtraction, multiplication, and
division, are executed in the ALU of the processor. The operands are brought into the ALU from memory and
stored in high-speed storage elements called registers. Then, according to the instructions, the operations are
performed in the required sequence. The control unit and the ALU are many times faster than the other devices
connected to a computer system. This enables a single processor to control a number of external devices such
as keyboards, displays, magnetic and optical disks, sensors and other mechanical controllers.
Control unit:- It effectively is the nerve center that sends signals to other units and senses their states. The
actual timing signals that govern the transfer of data between input unit, processor, memory and output unit
are generated by the control unit.
Interconnection structures:-
The collection of paths connecting the various modules is called the interconnection structure.
• All the units must be connected.
• A different type of connection is needed for each type of unit:
● Memory
● Input/Output
● CPU
Computer architecture is a specification describing how computer software and hardware connect and interact
to create a computer system. It determines the structure and function of computers and the technologies they
are compatible with, from the central processing unit (CPU) to memory, input/output devices, and storage units.
Computer scientists must build a computer with the same principles in mind as building the foundations of a
physical structure. The three main pillars they must consider are:
System design. This is what makes up the structure of a computer, including all hardware parts, such as CPU,
data processors, multiprocessors, memory controllers, and direct memory access.
Instruction set architecture (ISA). This is the interface between the computer's hardware and its software,
covering the CPU's functions and capabilities, programming languages, data formats, processor register types,
and the instructions used by programmers.
Microarchitecture. This defines the data-processing and storage elements and the data paths, including storage
devices and related computer organisation tools.
Despite the rapid advancement of computing, the fundamentals of computer architecture remain the same.
There are four main types of computer architecture: Von Neumann architecture, Harvard architecture,
Modified Harvard Architecture, and the RISC & CISC Architectures.
1. Von Neumann architecture
Named after mathematician and computer scientist John von Neumann, the Von Neumann architecture
features a single memory space for both data and instructions, which are fetched and executed sequentially.
This means that programs and data are stored in the same memory, allowing for flexible and easy modification
of programs.
But instructions are also fetched and executed one at a time, which creates a bottleneck where the CPU can't
fetch instructions and data simultaneously. This is known as the Von Neumann bottleneck. To address this,
modern CPUs employ techniques like caching and pipelining to improve efficiency.
[Diagram: Von Neumann architecture]
Still, the Von Neumann architecture remains highly relevant and influential in computer design. Von
Neumann's architecture introduced the concept of stored-program computers, where both instructions and data
are stored in the same memory, allowing for flexible program execution.
2. Harvard architecture
Unlike the von Neumann architecture where instructions and data share the same memory and data paths,
Harvard architecture is a type of computer architecture that has separate storage units and dedicated pathways
for instructions and data. This allows for simultaneous access to instructions and data, potentially improving
performance.
By having separate pathways, the CPU can fetch instructions and access data at the same time, without waiting
for each other, leading to faster program execution, especially for tasks that involve a lot of data movement.
Separate memory units can be optimized for their specific purposes. For example, instruction memory might
be read-only, while data memory might be optimized for fast read/write operations.
Still, implementing separate storage and pathways can be more complex than the simpler von Neumann
architecture and having separate memory units can increase the overall cost of the system.
3. Modified Harvard Architecture
A Modified Harvard Architecture is a hybrid type of computer architecture that combines features of both the
classic Harvard architecture and the more widely used von Neumann architecture.
Like a true Harvard architecture, a modified Harvard architecture utilizes separate caches for instructions and
data. These caches are much faster than main memory, so frequently accessed instructions and data can be
retrieved quickly.
However, unlike the pure Harvard architecture where instructions and data have completely separate physical
memory units, a modified Harvard architecture keeps instructions and data in the same main memory.
This combination allows for simultaneous access to instructions and data, boosting performance over a
standard von Neumann architecture with a single cache. Compared to a true Harvard architecture with separate
memory units, the unified memory simplifies the design and reduces costs.
Many processors you'll find in computers today use a modified Harvard architecture with separate instruction
and data caches.
4. RISC and CISC architectures
RISC (Reduced Instruction Set Computing) and CISC (Complex Instruction Set Computing) are two different
architectures for computer processors that determine how they handle instructions.
RISC processors are designed with a set of basic, well-defined instructions that are typically fixed-length and
easy for the processor to decode and execute quickly. The emphasis in RISC is on designing the hardware to
execute simple instructions efficiently, leading to faster clock speeds and potentially lower power
consumption. Examples of RISC processors include ARM processors commonly found in smartphones and
tablets, and MIPS processors used in some embedded systems.
CISC processors, however, have a wider range of instructions, including some very complex ones that can
perform multiple operations in a single instruction. This can be more concise for programmers but can take
the processor more time to decode and execute.
The goal of CISC is to provide a comprehensive set of instructions to handle a wide range of tasks, potentially
reducing the number of instructions a programmer needs to write. Examples of CISC processors include Intel’s
x86 processors, which are used in most personal computers, and the Motorola 68000 family of processors, which
was used in older Apple computers.
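As a rough illustration of the difference (an assumed example, not tied to any particular processor): a CISC
machine might provide a single instruction such as MULT A, B that fetches two operands from memory,
multiplies them, and writes the result back, whereas a RISC machine would express the same work as several
simple instructions, roughly LOAD R1, A; LOAD R2, B; MUL R3, R1, R2; STORE A, R3.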
Instruction set - The instruction set is a predefined set of operations that the CPU is
designed to carry out. It's like a vocabulary for the processor.
Components of Computer Architecture
While computer architectures can differ greatly depending on the purpose of the computer, several key
components generally contribute to its structure. These include:
Central Processing Unit (CPU) - Often referred to as the "brain" of the computer, the CPU executes
instructions, performs calculations, and manages data. Its architecture dictates factors such as instruction set,
clock speed, and cache hierarchy, all of which significantly impact overall system performance.
Memory Hierarchy - This includes various types of memory, such as cache memory, random access memory
(RAM), and storage devices. The memory hierarchy plays a crucial role in optimizing data access times, as
data moves between different levels of memory based on their proximity to the CPU and the frequency of
access.
Input/Output (I/O) System - The I/O system enables communication between the computer and external
devices, such as keyboards, monitors, and storage devices. It involves designing efficient data transfer
mechanisms to ensure smooth interaction and data exchange.
Storage Architecture - This deals with how data is stored and retrieved from storage devices like hard drives,
solid-state drives (SSDs), and optical drives. Efficient storage architectures ensure data integrity, availability,
and fast access times.
Instruction Pipelining - Modern CPUs employ pipelining, a technique that breaks down instruction execution
into multiple stages. This allows the CPU to process multiple instructions simultaneously, resulting in
improved throughput.
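As a rough worked example (numbers assumed for illustration): with a 5-stage pipeline in which each stage
takes one clock cycle, 100 instructions finish in about 5 + (100 - 1) = 104 cycles once the pipeline fills, versus
100 * 5 = 500 cycles without pipelining, approaching a 5x gain in throughput.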
Parallel Processing - This involves dividing a task into smaller subtasks and executing them concurrently,
often on multiple cores or processors. Parallel processing significantly accelerates computations, making it
key to tasks like simulations, video rendering, and machine learning.
All of the above parts are connected through a system bus consisting of the address bus, data bus and control
bus. The diagram below is an example of this structure:
[Diagram: structure of basic computer architecture with a uniprocessor CPU]
Challenges in Computer Architecture
As computers become more powerful, new challenges arise:
1) Scalability: As systems grow larger, architects face challenges in managing data traffic,
synchronisation, and latency across components. Ensuring that systems remain efficient,
responsive, and cost-effective at scale requires complex coordination and advanced design
strategies.
2) Security: As architectures become more advanced and interconnected, they also become more
susceptible to cybersecurity threats. Vulnerabilities at the hardware level (such as Spectre and
Meltdown) have shown that even the CPU can be a target.
3) Cost: Advanced computer architectures and cutting-edge manufacturing processes often come
with high development costs. These expenses can impact affordability and limit accessibility for
widespread deployment.
BUS
A group of lines that serve as a connecting port for several devices is called a bus.
Types of Computer Bus
There are a variety of buses found inside the computer.
Data Bus: The data bus allows data to travel back and forth between the microprocessor (CPU) and memory
(RAM).
Address Bus: The address bus carries information about the location (address) of data in memory.
Control Bus: The control bus carries the control signals that make sure everything is flowing smoothly from
place to place.
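The roles of the three buses can be pictured with a short C sketch (a toy model, not a real hardware interface; all names here are assumptions). A memory read is modeled as the CPU driving the address bus, asserting a read line on the control bus, and then sampling the data bus:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint16_t ram[256];        /* toy memory on the far side of the bus */

/* Bus wires modeled as shared variables (illustrative only). */
static uint16_t address_bus;     /* where: the memory location            */
static uint16_t data_bus;        /* what: the value being transferred     */
static bool     control_read;    /* one signal line of the control bus    */

/* Memory module: when it sees READ asserted, it drives the data bus. */
void memory_respond(void) {
    if (control_read)
        data_bus = ram[address_bus];
}

/* CPU side of one read cycle: drive address, assert control, sample data. */
uint16_t cpu_read(uint16_t addr) {
    address_bus  = addr;
    control_read = true;
    memory_respond();
    control_read = false;
    return data_bus;
}

int main(void) {
    ram[42] = 0xBEEF;
    printf("read M[42] = 0x%04X\n", (unsigned)cpu_read(42));
    return 0;
}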
The traditional bus connection uses three buses: the local bus, the system bus, and the expansion bus.
1. The local bus connects the processor to cache memory and may support one or more local devices.
2. The cache memory controller connects the cache to the local bus and to the system bus.
3. Input/output transfers to and from main memory across the system bus do not interfere with processor
activity, because the processor accesses cache memory.
4. It is possible to connect I/O controllers directly onto the system bus. A more efficient solution is to make
use of one or more expansion buses for this purpose. An expansion bus interface buffers data transfers between
the system bus and the I/O controllers on the expansion bus.
This arrangement allows the system to support a wide variety of I/O devices and at the same time insulate
memory-to-processor traffic from I/O traffic.
A single shared bus has two main drawbacks:
• The bus may become a bottleneck as the aggregate data-transfer demand approaches the capacity of the
bus, because the data rates generated by attached devices such as graphics and video controllers are growing
rapidly.
• Only one bus master can operate at a time; the others must wait.
To overcome these problems, most computer systems use multiple buses, generally laid out in a hierarchy.
BUS ARBITRATION:
● The device that is allowed to initiate data transfers on the bus at any given time is called the bus master.
In a computer system there may be more than one bus master, such as the processor, a DMA controller, etc.
● They share the system bus. When the current master relinquishes control of the bus, another bus master
can acquire the control of the bus.
● Bus arbitration is the process by which the next device to become the bus master is selected and bus
mastership is transferred to it. The selection of bus master is usually done on a priority basis.
● There are two approaches to bus arbitration: Centralized and distributed.
1. Centralized Arbitration
o In centralized bus arbitration, a single bus arbiter performs the required arbitration. The bus
arbiter may be the processor or a separate controller connected to the bus.
o There are three different arbitration schemes that use the centralized bus arbitration approach.
These schemes are:
a. Daisy chaining
b. Polling method
c. Independent request
a) Daisy chaining
The system connections for the daisy chaining method are shown in the figure below.
● It is a simple and cheap method. All masters make use of the same line for bus requests.
● In response to the bus request, the controller sends a bus grant if the bus is free.
● The bus grant signal serially propagates through each master until it encounters the first one that is
requesting access to the bus. This master blocks the propagation of the bus grant signal, activates the
busy line, and gains control of the bus (see the sketch after this list).
● Therefore, any other requesting module will not receive the grant signal and hence cannot get bus
access.
Advantages –
● Simplicity and Scalability.
● The user can add more devices anywhere along the chain, up to a certain maximum value.
Disadvantages –
● The priority assigned to a device depends on its position in the chain.
● Propagation delay arises in this method.
● If one device fails, the entire system may stop working, because the grant signal cannot propagate
past the failed device.
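The grant-propagation behaviour above can be sketched in a few lines of C (a toy model; the number of masters is an assumption). The grant enters at master 0 and stops at the first requester, which is exactly why a device's priority is fixed by its position in the chain:

#include <stdbool.h>
#include <stdio.h>

#define N_MASTERS 4

/* Returns the index of the master that wins the bus, or -1 if none.
 * The grant is blocked by the first requesting master, so lower
 * positions in the chain always have higher priority. */
int daisy_chain_arbitrate(const bool request[N_MASTERS]) {
    for (int i = 0; i < N_MASTERS; i++) {
        if (request[i])
            return i;   /* this master blocks the grant and takes the bus */
        /* otherwise the grant propagates to the next master in the chain */
    }
    return -1;          /* bus stays idle: nobody requested it */
}

int main(void) {
    bool request[N_MASTERS] = { false, true, false, true };
    printf("bus granted to master %d\n", daisy_chain_arbitrate(request));
    return 0;
}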
b) Polling method
● Also called the rotating priority method.
● The system connections for the polling method are shown in the figure above.
● In this method the controller generates the addresses for the masters. The number of address lines
required depends on the number of masters connected in the system.
● For example, if there are 8 masters connected in the system, at least three address lines are required
(since 2^3 = 8).
● In response to a bus request, the controller generates a sequence of master addresses. When the
requesting master recognizes its address, it activates the busy line and begins to use the bus (see the
sketch after this list).
Advantages –
● This method does not favor any particular device or processor.
● The method is also quite simple.
● If one device fails, the entire system does not stop working.
Disadvantages –
● Adding bus masters is difficult, as doing so increases the number of address lines in the circuit.
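The rotating-priority idea can be sketched similarly in C (again a toy model; starting the poll just after the previous winner is the assumption that keeps any one master from being permanently favoured):

#include <stdbool.h>
#include <stdio.h>

#define N_MASTERS 8     /* 8 masters => 3 address lines, since 2^3 = 8 */

/* Poll master addresses starting after the previous winner; return the
 * new winner, or -1 if no master is requesting the bus. */
int polling_arbitrate(const bool request[N_MASTERS], int last_winner) {
    for (int k = 1; k <= N_MASTERS; k++) {
        int candidate = (last_winner + k) % N_MASTERS;  /* rotating order */
        if (request[candidate])
            return candidate;  /* master recognizes its address, takes the bus */
    }
    return -1;
}

int main(void) {
    bool request[N_MASTERS] = { true, false, true, false,
                                false, false, false, false };
    /* last winner was master 0, so master 2 is polled before master 0 */
    printf("bus granted to master %d\n", polling_arbitrate(request, 0));
    return 0;
}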
c) Independent request
● In this scheme, each master has its own separate pair of bus request and bus grant lines, connected
directly to the bus controller, which resolves priority internally.
Advantages –
● This method generates fast response.
Disadvantages –
● Hardware cost is high, as a large number of control lines are required.
Computer Instructions
Computer instructions are a set of machine language instructions that a particular processor
understands and executes. A computer performs tasks on the basis of the instructions provided. An
instruction format typically contains three fields:
o The Operation code (Opcode) field, which specifies the operation to be performed.
o The Address field, which contains the location of the operand, i.e., a register or memory
location.
o The Mode field, which specifies how the operand will be located.
The Register-reference instructions are represented by the Opcode 111 with a 0 in the leftmost bit
(bit 15) of the instruction.
Note: The Operation code (Opcode) of an instruction refers to a group of bits that define arithmetic
and logic operations such as add, subtract, multiply, shift, and complement.
A Register-reference instruction specifies an operation on or a test of the AC (Accumulator)
register.
Input-Output instruction
Just like the Register-reference instruction, an Input-Output instruction does not need a reference to
memory and is recognized by the operation code 111 with a 1 in the leftmost bit of the instruction.
These instructions are for communication between the computer and the outside environment. IR(14-12)
is 111, which differentiates it from a memory-reference instruction, and IR(15) is 1, which differentiates
it from a register-reference instruction. The remaining 12 bits specify the type of input-output operation
or test to be performed.
Note
o The three operation code bits in positions 12 through 14 should be equal to 111. Otherwise,
the instruction is a memory-reference type, and the bit in position 15 is taken as the
addressing mode I.
o When the three operation code bits are equal to 111, the control unit inspects the bit in position
15. If that bit is 0, the instruction is a register-reference type. Otherwise, the instruction is an
input-output type, having bit 1 at position 15.
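The decoding rules in the note above can be expressed compactly in C (a minimal sketch of just this classification step; the instruction words in main are made-up examples). Bits 14-12 carry the opcode and bit 15 carries the mode/type bit:

#include <stdint.h>
#include <stdio.h>

/* Classify a 16-bit basic-computer instruction word. */
const char *classify(uint16_t instr) {
    uint16_t opcode = (instr >> 12) & 0x7;   /* bits 14..12 */
    uint16_t bit15  = (instr >> 15) & 0x1;   /* addressing mode I, or type bit */

    if (opcode != 0x7)                       /* not 111: memory-reference */
        return bit15 ? "memory-reference (indirect)"
                     : "memory-reference (direct)";
    return bit15 ? "input-output" : "register-reference";
}

int main(void) {
    printf("%s\n", classify(0x7800));  /* 0 111 ...: register-reference */
    printf("%s\n", classify(0xF400));  /* 1 111 ...: input-output       */
    printf("%s\n", classify(0x2123));  /* opcode 010: memory-reference  */
    return 0;
}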
Instruction cycle
A program residing in the memory unit of a computer consists of a sequence of instructions. These
instructions are executed by the processor by going through a cycle for each instruction. In a basic
computer, each instruction cycle consists of the following phases:
1. Fetch the instruction from memory.
2. Decode the instruction.
3. Read the effective address from memory (if the instruction has an indirect address).
4. Execute the instruction.
An instruction can be of various lengths depending upon the number of addresses it contains. Generally,
CPU organizations are of three types on the basis of the number of address fields:
1. Stack organization
The two main operations performed on the operands of the stack are push and pop. These
two operations are performed from one end only.
1. Push - This operation inserts one operand at the top of the stack, and it decrements
the stack pointer register. The format of the PUSH instruction is:
PUSH
It inserts the data word at the specified address onto the top of the stack. It can be implemented as:
SP <-- SP - 1                //decrement SP by 1
M[SP] <-- (memory address)   //store the content of the specified memory
                             //address at the top of the stack
2. Pop - This operation deletes one operand from the top of the stack, and it increments the stack
pointer register. The format of the POP instruction is:
POP
It transfers the data word at the top of the stack to the specified address. It can be implemented as:
(memory address) <-- M[SP]   //transfer the top of the stack
                             //into the specified memory location
SP <-- SP + 1                //increment SP by 1
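Both operations can be modeled together in a short C sketch (a toy memory stack; the memory size, the initial SP value, and the function names are assumptions). As described above, the stack grows toward lower addresses, so push decrements SP and pop increments it:

#include <stdint.h>

#define MEM_SIZE 4096

static uint16_t M[MEM_SIZE];   /* main memory                         */
static uint16_t SP = 4000;     /* stack pointer; the stack grows down */

/* PUSH: decrement SP, then store the word at the new top of the stack. */
void push(uint16_t word) {
    SP = SP - 1;               /* SP <-- SP - 1  */
    M[SP] = word;              /* M[SP] <-- word */
}

/* POP: read the word at the top of the stack, then increment SP. */
uint16_t pop(void) {
    uint16_t word = M[SP];     /* word <-- M[SP] */
    SP = SP + 1;               /* SP <-- SP + 1  */
    return word;
}

(The sketch after the SUB example below continues this one and uses these two helpers.)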
An operation-type instruction does not need an address field in this CPU organization,
because the operation is performed on the two operands that are on the top of the stack.
For example:
SUB
This instruction contains only the opcode, with no address field. It pops the top two data words from the
stack, subtracts them, and pushes the result back onto the top of the stack.
The PDP-11, Intel's 8085, and the HP 3000 are some examples of stack-organized
computers.
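Continuing the push/pop sketch above (so it reuses those declarations), the zero-address SUB can be modeled like this; the operand order is an assumption here, with the value pushed first treated as the minuend:

#include <stdio.h>

/* Zero-address SUB: pop the two top items, subtract, push the result. */
void sub(void) {
    uint16_t b = pop();        /* top of the stack       */
    uint16_t a = pop();        /* next item on the stack */
    push(a - b);
}

/* Example: computing 7 - 3 on the stack machine. */
int main(void) {
    push(7);
    push(3);
    sub();
    printf("result = %u\n", (unsigned)pop());  /* prints 4 */
    return 0;
}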
Register Stack:
Memory Stack:
⮚ In the above discussion, the stack exists as a stand-alone unit. But in a CPU, a stack is implemented
by assigning a portion of memory to stack operations and using a processor register as the stack
pointer.
⮚ The figure below shows a portion of computer memory partitioned into three segments:
program, data, and stack.
⮚ The program counter PC points at the address of the next instruction in the program.
⮚ The address register AR points at an array of data.
⮚ The stack pointer SP points at the top of the stack.
2. Single accumulator organization
The computers present in the early days of computer history had accumulator-based
CPUs. In this type of CPU organization, the accumulator register is used implicitly for
processing all instructions of a program, and the results are stored in the accumulator. The
instruction format used by this CPU organization has one address field. Due to
this, the CPU is known as a one-address machine.
The main points about the single accumulator based CPU organization are:
1. In this CPU organization, the first ALU operand is always stored in the
accumulator, and the second operand is present either in registers or in memory.
2. The accumulator is the default address; thus, after data manipulation, the results are
stored in the accumulator.
3. One-address instructions are used in this type of organization.
The format of an instruction is: Opcode + Address
Opcode indicates the type of operation to be performed.
Mainly two types of operation are performed in the single accumulator based CPU
organization:
1. Data transfer operation -
In this type of operation, data is transferred between the accumulator and memory
(for example, load and store instructions).
2. ALU operation -
In this type of operation, arithmetic operations are performed on the data.
For example: MULT X
where X is the address of the operand. The MULT instruction in this example performs the operation:
AC <-- AC * M[X]
AC is the accumulator and M[X] is the memory word located at address X.
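The one-address discipline can be sketched in C as well (a toy model; the memory contents and addresses are made-up values). Every ALU instruction names one memory address, while the second operand and the destination are implicitly the accumulator:

#include <stdint.h>
#include <stdio.h>

static uint16_t M[256];    /* memory                   */
static uint16_t AC;        /* the accumulator register */

/* One-address instructions: the other operand and the result are
 * always the accumulator, so one address field suffices. */
void load(uint8_t x)  { AC = M[x]; }        /* AC <-- M[X]      */
void mult(uint8_t x)  { AC = AC * M[x]; }   /* AC <-- AC * M[X] */
void store(uint8_t x) { M[x] = AC; }        /* M[X] <-- AC      */

int main(void) {
    M[10] = 6; M[11] = 7;
    load(10);               /* AC = 6          */
    mult(11);               /* AC = 6 * 7 = 42 */
    store(12);
    printf("M[12] = %u\n", (unsigned)M[12]);
    return 0;
}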
This type of CPU organization was first used in the PDP-8 processor and was used for process
control and laboratory applications. It has been almost completely replaced by the newer
general register based CPUs.
Advantages –
· One of the operands is always held in the accumulator register. This results in
short instructions and less memory space.
· The instruction cycle takes less time because it saves time in instruction fetching
from memory.
Disadvantages –
· When complex expressions are computed, program size increases due to the
usage of many short instructions to execute them. Thus memory size increases.
· As the number of instructions increases for a program, the execution time
increases.