Computer System Architecture Lecture Notes

Computer Systems Architectures (University of Greenwich)

The document provides an overview of computer system architecture, focusing on the von Neumann machine, which integrates data and instructions in a single memory. It discusses the components essential for performance, such as the arithmetic and logic unit, control unit, and memory, as well as advancements in microprocessor technology and strategies for improving performance. Additionally, it outlines the basic functions of a computer, including instruction fetch and execution processes.

THE VON NEUMANN MACHINE


The task of entering and altering programs for the ENIAC was extremely tedious. The programming process would be far easier if the program could be represented in a form suitable for storing in memory alongside the data. Then, a computer could get its instructions by reading them from memory, and a program could be set or altered by setting the values of a portion of memory. This idea is the stored-program concept, and a machine built around it, the von Neumann machine, has the following general structure:

• A main memory, which stores both data and instructions
• An arithmetic and logic unit (ALU) capable of operating on binary data

• A control unit, which interprets the instructions in memory and causes them to be
executed
• Input and output (I/O) equipment operated by the control unit

This structure was outlined in von Neumann’s earlier proposal, which is worth quoting at
this point:

First: Because the device is primarily a computer, it will have to perform the elementary
operations of arithmetic most frequently. At any rate a central arithmetical part of the
device will probably have to exist and this constitutes the first specific part: CA.


Second: The logical control of the device, that is, the proper sequencing of its operations, can be most efficiently carried out by a central control organ. The central control and the organs which perform it form the second specific part: CC.

Third: Any device which is to carry out long and complicated sequences of operations
(specifically of calculations) must have a considerable memory . . . At any rate, the total
memory constitutes the third specific part of the device: M.

Fourth: The device must have organs to transfer . . . information from R into its specific parts C and M. These organs form its input, the fourth specific part: I.

Fifth: The device must have organs to transfer . . . from its specific parts C and M into R.
These organs form its output, the fifth specific part: O.

The control unit operates the IAS by fetching instructions from memory and executing them
one at a time. A more detailed structure diagram is shown in Figure 1.2. This figure reveals
that both the control unit and the ALU contain storage locations, called registers, defined as
follows:

• Memory buffer register (MBR): Contains a word to be stored in memory or sent to the I/O unit, or is used to receive a word from memory or from the I/O unit.
• Memory address register (MAR): Specifies the address in memory of the word to be
written from or read into the MBR.
• Instruction register (IR): Contains the 8-bit opcode instruction being executed.
• Instruction buffer register (IBR): Employed to hold temporarily the right-hand
instruction from a word in memory.
• Program counter (PC): Contains the address of the next instruction-pair to be
fetched from memory.
• Accumulator (AC) and multiplier quotient (MQ): Employed to hold temporarily
operands and results of ALU operations.
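
To make these register roles concrete, here is a minimal sketch in Python of an IAS-style fetch, assuming the IAS word format of two 20-bit instructions (an 8-bit opcode plus a 12-bit address each) packed into one 40-bit word; the memory contents are invented for illustration.

    # Minimal sketch of an IAS-style instruction fetch (illustrative only).
    # A 40-bit word holds two 20-bit instructions: 8-bit opcode + 12-bit address.
    memory = {300: 0b00000001_000100101100_00000010_000100101101}  # invented contents

    PC  = 300            # address of the next instruction pair
    MAR = PC             # the address is placed in the memory address register
    MBR = memory[MAR]    # the fetched 40-bit word lands in the memory buffer register

    left  = (MBR >> 20) & 0xFFFFF   # left-hand instruction, executed first
    right = MBR & 0xFFFFF           # right-hand instruction

    IR  = (left >> 12) & 0xFF       # 8-bit opcode of the instruction being executed
    MAR = left & 0xFFF              # its 12-bit address field, ready for the operand fetch
    IBR = right                     # right-hand instruction waits in the instruction buffer register
    PC += 1                         # next fetch takes the following instruction pair

    print(f"IR={IR:08b} MAR={MAR:012b} IBR={IBR:020b} PC={PC}")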

DESIGNING FOR PERFORMANCE

Year by year, the cost of computer systems continues to drop dramatically, while the
performance and capacity of those systems continue to rise equally dramatically. Desktop
applications that require the great power of today’s microprocessor-based systems include

• Image processing
• Speech recognition
• Videoconferencing
• Multimedia authoring
• Voice and video annotation of files


• Simulation modeling

Microprocessor Speed

The evolution of microprocessors continues to bear out Moore’s law. So long as this law holds, chipmakers can unleash a new generation of chips every three years, with four times as many transistors. In microprocessors, the addition of new circuits, and the speed boost that comes from reducing the distances between them, has improved performance four- or fivefold every three years or so since Intel launched its x86 family in 1978. The more elaborate techniques for “feeding the monster” that are built into contemporary processors include the following:
• Branch prediction: The processor looks ahead in the instruction code fetched from memory and predicts which branches, or groups of instructions, are likely to be processed next (a toy predictor sketch follows this list).
• Data flow analysis: The processor analyzes which instructions are dependent on each other’s results, or data, to create an optimized schedule of instructions.
• Speculative execution: Using branch prediction and data flow analysis, some processors speculatively execute instructions ahead of their actual appearance in the program execution, holding the results in temporary locations.
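
These notes do not name a specific prediction mechanism, but a common textbook scheme is the 2-bit saturating counter. The Python sketch below is illustrative only; the branch trace and all names are invented.

    # 2-bit saturating counter branch predictor (a common scheme; illustrative only).
    # States 0-1 predict "not taken"; states 2-3 predict "taken".
    def predict(state):
        return state >= 2  # True means "predict taken"

    def update(state, taken):
        # Step toward 3 on taken branches, toward 0 otherwise, saturating at the ends.
        return min(state + 1, 3) if taken else max(state - 1, 0)

    state = 0                        # start strongly "not taken"
    trace = [True] * 9 + [False]     # a loop branch: taken 9 times, then falls through
    correct = 0
    for outcome in trace:
        correct += (predict(state) == outcome)
        state = update(state, outcome)

    print(f"{correct}/{len(trace)} predictions correct")  # 7/10: wrong twice warming up, once at exit

Because the counter needs two consecutive surprises to flip its prediction, a single loop exit does not make the predictor mispredict the branch the next time the loop is entered.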

Performance Balance

While processor power has raced ahead at breakneck speed, other critical components of the computer have not kept up. The result is a need to look for performance balance: an adjustment of the organization and architecture to compensate for the mismatch among the capabilities of the various components.

The interface between processor and main memory is the most crucial pathway in the entire
computer because it is responsible for carrying a constant flow of program instructions and
data between memory chips and the processor.

There are a number of ways that a system architect can attack this problem, all of which are
reflected in contemporary computer designs. Consider the following examples:

• Increase the number of bits that are retrieved at one time by making DRAMs “wider” rather than “deeper” and by using wide bus data paths (see the rough bandwidth arithmetic after this list).
• Change the DRAM interface to make it more efficient by including a cache or other buffering scheme on the DRAM chip.
• Reduce the frequency of memory access by incorporating increasingly complex and efficient cache structures between the processor and main memory.
• Increase the interconnect bandwidth between processors and memory by using higher-speed buses and by using a hierarchy of buses to buffer and structure data flow.
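
As a rough illustration of the first and last points, peak transfer bandwidth is simply bus width times transfer rate, so widening the data path or raising the bus speed scales it directly. The figures in this Python sketch are invented, not taken from any particular system.

    # Peak bus bandwidth = (bus width in bytes) x (transfers per second).
    # All figures below are invented for illustration.
    def peak_bandwidth(bus_width_bits, transfers_per_sec):
        return (bus_width_bits // 8) * transfers_per_sec   # bytes per second

    narrow = peak_bandwidth(32, 100_000_000)   # 32-bit bus at 100 million transfers/s
    wide   = peak_bandwidth(64, 100_000_000)   # doubling the width doubles the peak

    print(f"32-bit bus: {narrow / 1e6:.0f} MB/s, 64-bit bus: {wide / 1e6:.0f} MB/s")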


Improvements in Chip Organization and Architecture

There are three approaches to achieving increased processor speed:

• Increase the hardware speed of the processor.
• Increase the size and speed of caches that are interposed between the processor
and main memory. In particular, by dedicating a portion of the processor chip itself
to the cache, cache access times drop significantly.
• Make changes to the processor organization and architecture that increase the
effective speed of instruction execution.

However, as clock speed and logic density increase, a number of obstacles become more
significant:

• Power: As the density of logic and the clock speed on a chip increase, so does the
power density.
• RC delay: The speed at which electrons can flow on a chip between transistors is limited by the resistance and capacitance of the metal wires connecting them; specifically, delay increases as the RC product increases. As components on the chip decrease in size, the wire interconnects become thinner, increasing resistance. Also, the wires are closer together, increasing capacitance (a first-order scaling sketch follows this list).
• Memory latency: Memory speeds lag processor speeds.
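
A first-order sketch of the RC effect, using standard parallel-plate approximations rather than anything from these notes: a wire of length L, width W, and thickness H, at spacing s from its neighbor, has roughly R ≈ ρL/(WH) and C ≈ εLH/s, so the delay scales as t_RC ∝ RC ≈ ρεL^2/(Ws). Shrinking W and s — the “thinner and closer” wires described above — drives the delay up even when the wire length L stays fixed.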

Beginning in the late 1980s, and continuing for about 15 years, two main strategies have
been used to increase performance beyond what can be achieved simply by increasing clock
speed. First, there has been an increase in cache capacity. Second, the instruction execution
logic within a processor has become increasingly complex to enable parallel execution of
instructions within the processor.

Two noteworthy design approaches have been pipelining and superscalar execution. A pipeline works much like an assembly line in a manufacturing plant, enabling different stages of execution of different instructions to occur at the same time along the pipeline. A superscalar approach in essence allows multiple pipelines within a single processor, so that instructions that do not depend on one another can be executed in parallel; a cycle-count sketch of the pipelining benefit follows.
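
Assuming an ideal k-stage pipeline with no stalls or hazards (standard back-of-the-envelope arithmetic, not specific to any processor), n instructions take about k + (n - 1) cycles pipelined versus n * k cycles unpipelined.

    # Ideal pipeline cycle counts (no stalls or hazards assumed; illustrative only).
    def unpipelined_cycles(n, k):
        return n * k               # each instruction occupies all k stages in turn

    def pipelined_cycles(n, k):
        return k + (n - 1)         # k cycles to fill, then one completion per cycle

    n, k = 1000, 5                 # invented workload: 1000 instructions, 5-stage pipeline
    speedup = unpipelined_cycles(n, k) / pipelined_cycles(n, k)
    print(f"speedup ~ {speedup:.2f} (approaches k = {k} for large n)")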

COMPUTER COMPONENTS

Virtually all contemporary computer designs are based on concepts developed by John von
Neumann at the Institute for Advanced Study, Princeton. Such a design is referred to as
the von Neumann architecture and is based on three key concepts:

• Data and instructions are stored in a single read–write memory.


• The contents of this memory are addressable by location, without regard to the type
of data contained there.
• Execution occurs in a sequential fashion (unless explicitly modified) from one
instruction to the next.

If there is a particular computation to be performed, a configuration of logic components designed specifically for that computation could be constructed. The resulting “program” is in the form of hardware and is termed a hardwired program.

COMPUTER FUNCTIONS

The basic function performed by a computer is execution of a program, which consists of a set of instructions stored in memory. Instruction processing consists of two steps: the processor reads (fetches) instructions from memory one at a time and executes each instruction. Program execution consists of repeating the process of instruction fetch and instruction execution.

Instruction Fetch and Execute

At the beginning of each instruction cycle, the processor fetches an instruction from memory. The program counter (PC) holds the address of the instruction to be fetched next; the processor increments the PC after each instruction fetch so that it will fetch the next instruction in sequence.

For example, consider a computer in which each instruction occupies one 16-bit word of memory. If the program counter is set to location 300, the processor will next fetch the instruction at location 300. On subsequent instruction cycles, it will fetch instructions from locations 301, 302, 303, and so on.

The fetched instruction is loaded into a register in the processor known as the instruction
register (IR). The processor interprets the instruction and performs the required action. In
general, these actions fall into four categories:

• Processor-memory: Data may be transferred from processor to memory or from memory to processor.
• Processor-I/O: Data may be transferred to or from a peripheral device by
transferring between the processor and an I/O module.
• Data processing: The processor may perform some arithmetic or logic operation on
data.
• Control: An instruction may specify that the sequence of execution be altered. For
example, the processor may fetch an instruction from location 149, which specifies


that the next instruction be from location 182. The processor will remember this fact by setting the program counter to 182. Thus, on the next fetch cycle, the instruction will be fetched from location 182 rather than 150.

An instruction’s execution may involve a combination of these actions. Consider a simple example using a hypothetical machine that includes the characteristics listed in Figure 1.6. The processor contains a single data register, called an accumulator (AC). Both instructions and data are 16 bits long. Thus, it is convenient to organize memory using 16-bit words. The instruction format provides 4 bits for the opcode, so that there can be as many as 2^4 = 16 different opcodes, and up to 2^12 = 4096 (4K) words of memory can be directly addressed; a small decode sketch follows.
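
Splitting such a 16-bit instruction into its 4-bit opcode and 12-bit address is a matter of shifts and masks. A minimal Python sketch; the sample instruction word is invented.

    # Decode a 16-bit instruction: top 4 bits = opcode, low 12 bits = address.
    # The sample word below is invented for illustration.
    instruction = 0b0001_0000_0001_0010    # opcode 0001, address 0x012

    opcode  = (instruction >> 12) & 0xF    # 4-bit opcode: up to 2**4 = 16 operations
    address = instruction & 0xFFF          # 12-bit address: up to 2**12 = 4096 words

    print(f"opcode={opcode:04b} address=0x{address:03X}")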
