Operating System
Operating System • An operating system acts as an intermediary between the user of a computer and the computer hardware. The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner • An operating system is software that manages the computer hardware. The hardware must provide appropriate mechanisms to ensure the correct operation of the computer system and to prevent user programs from interfering with the proper operation of the system
Computer System Overview • A computer consists of processor, memory, and I/O components, with one or more modules of each type. These components are interconnected in some fashion to achieve the main function of the computer • Processor: Controls the operation of the computer and performs its data processing functions. When there is only one processor, it is often referred to as the central processing unit (CPU) • Main memory: Stores data and programs. This memory is typically volatile; that is, when the computer is shut down, the contents of the memory are lost. • In contrast, the contents of disk memory are retained even when the computer system is shut down. Main memory is also referred to as real memory or primary memory.
Computer System Overview • I/O modules: Move data between the computer and its external environment. The external environment consists of a variety of devices, including secondary memory devices (e.g., disks), communications equipment, and terminals. • System bus: Provides for communication among processors, main memory, and I/O modules.
Computer System Overview
INSTRUCTION EXECUTION • Instruction processing consists of two steps: The processor reads (fetches) instructions from memory one at a time and executes each instruction. • Instruction execution may involve several operations and depends on the nature of the instruction. • Processor-memory: Data may be transferred from processor to memory or from memory to processor. • Processor-I/O: Data may be transferred to or from a peripheral device by transferring between the processor and an I/O module. • Data processing: The processor may perform some arithmetic or logic operation on data. • Control: An instruction may specify that the sequence of execution be altered.
INSTRUCTION EXECUTION
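To make the fetch-execute cycle concrete, here is a minimal C sketch of a hypothetical accumulator machine. The 4-bit opcode layout, the opcode values, and the memory size are invented purely for illustration and do not correspond to any real instruction set.

    /* Minimal sketch of the fetch-execute cycle for a hypothetical
     * accumulator machine. Instruction format and opcodes are invented. */
    #include <stdio.h>
    #include <stdint.h>

    #define MEM_WORDS 1024

    enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

    int main(void) {
        uint16_t mem[MEM_WORDS] = {0};
        uint16_t pc = 0;            /* program counter */
        uint16_t acc = 0;           /* accumulator */

        /* Tiny program: acc = mem[100] + mem[101]; mem[102] = acc; halt */
        mem[0] = (OP_LOAD  << 12) | 100;
        mem[1] = (OP_ADD   << 12) | 101;
        mem[2] = (OP_STORE << 12) | 102;
        mem[3] = (OP_HALT  << 12);
        mem[100] = 7;
        mem[101] = 5;

        for (;;) {
            uint16_t instr  = mem[pc++];       /* fetch */
            uint16_t opcode = instr >> 12;     /* decode */
            uint16_t addr   = instr & 0x0FFF;

            switch (opcode) {                  /* execute */
            case OP_LOAD:  acc = mem[addr];   break;  /* memory -> processor */
            case OP_ADD:   acc += mem[addr];  break;  /* data processing */
            case OP_STORE: mem[addr] = acc;   break;  /* processor -> memory */
            case OP_HALT:  printf("result = %u\n", (unsigned)mem[102]); return 0;
            }
        }
    }

Each pass through the loop performs one fetch and one execute step; the LOAD and STORE cases are processor-memory transfers and ADD is a data-processing operation, matching the categories listed above.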
INTERRUPTS • Virtually all computers provide a mechanism by which other modules (I/O, memory) may interrupt the normal sequencing of the processor. • Suppose that the processor is transferring data to a printer using the ordinary instruction cycle. After each write operation, the processor must pause and remain idle until the printer catches up. • The length of this pause may be on the order of many thousands or even millions of instruction cycles. Clearly, this is a very wasteful use of the processor.
INTERRUPTS
INTERRUPTS
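Hardware interrupt handling cannot be shown directly in portable user-level code, but POSIX signals give a reasonable user-level analogue of the control flow: in the sketch below (using the standard sigaction, alarm, and pause calls) a "device" interrupts a waiting program instead of forcing it to busy-wait. This is only an analogy for the mechanism described above, not an implementation of it.

    /* User-level analogue of interrupt-driven I/O using POSIX signals:
     * the main flow is not idle-polling the device; it is interrupted
     * asynchronously when the event (here, an alarm timer) occurs. */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t ticks = 0;

    static void handler(int sig) {   /* plays the role of an interrupt handler */
        (void)sig;
        ticks++;                     /* do minimal work, then return to the main flow */
    }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_handler = handler;
        sigaction(SIGALRM, &sa, NULL);

        alarm(1);                    /* the "device" will interrupt in about 1 second */
        while (ticks == 0)
            pause();                 /* sleep until a signal arrives; no busy-waiting */

        printf("interrupt handled, resuming normal execution\n");
        return 0;
    }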
Multiple Interrupts • One or more interrupts can occur while an interrupt is being processed. For example, a program may be receiving data from a communications line and printing results at the same time. • The printer will generate an interrupt every time it completes a print operation. The communication line controller will generate an interrupt every time a unit of data arrives.
Multiple Interrupts
Memory Hierarchy
Memory Hierarchy • The design constraints on a computer’s memory can be summed up by three questions: How much? How fast? How expensive? • As might be expected, there is a trade-off among the three key characteristics of memory: namely, capacity, access time, and cost. • Faster access time, greater cost per bit • Greater capacity, smaller cost per bit • Greater capacity, slower access speed
CACHE MEMORY • On every instruction cycle, the processor accesses memory at least once, to fetch the instruction, and often one or more additional times, to fetch operands and/or store results. • Cache memory exploits the principle of locality by providing a small, fast memory between the processor and main memory, namely the cache.
CACHE MEMORY • The cache contains a copy of a portion of main memory. When the processor attempts to read a byte or word of memory, a check is made to determine if the byte or word is in the cache. If so, the byte or word is delivered to the processor. If not, a block of main memory, consisting of some fixed number of bytes, is read into the cache and then the byte or word is delivered to the processor.
CACHE MEMORY • Cache size: It turns out that reasonably small caches can have a significant impact on performance. • Block size: the unit of data exchanged between cache and main memory. • Mapping function: determines which cache location the block will occupy. • Replacement algorithm: chooses, within the constraints of the mapping function, which block to replace when a new block is to be loaded into the cache and the cache already has all slots filled with other blocks. • Write policy: dictates when the memory write operation takes place. At one extreme, the writing can occur every time that the block is updated. At the other extreme, the writing occurs only when the block is replaced.
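The design elements above can be illustrated with a small sketch of a direct-mapped cache in C. The line count, block size, and structure are invented for illustration; in a direct-mapped cache the mapping function fixes the line for each block, so the replacement decision is trivial, and no write policy appears because the sketch handles reads only.

    /* Sketch of a direct-mapped cache lookup. Sizes (64 lines of 16 bytes)
     * are illustrative, not taken from any particular processor. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define LINE_SIZE 16        /* block size: bytes exchanged with main memory */
    #define NUM_LINES 64        /* cache size = 64 * 16 bytes = 1 KiB */

    struct cache_line {
        bool     valid;
        uint32_t tag;
        uint8_t  data[LINE_SIZE];
    };

    static struct cache_line cache[NUM_LINES];

    /* Mapping function: address -> (tag, line index, offset within block). */
    uint8_t cache_read(uint32_t addr, const uint8_t *main_memory) {
        uint32_t offset = addr % LINE_SIZE;
        uint32_t index  = (addr / LINE_SIZE) % NUM_LINES;
        uint32_t tag    = addr / (LINE_SIZE * NUM_LINES);

        struct cache_line *line = &cache[index];
        if (!line->valid || line->tag != tag) {
            /* Miss: read the whole block from main memory. Replacement is
             * trivial, since each block maps to exactly one line. */
            memcpy(line->data, &main_memory[addr - offset], LINE_SIZE);
            line->tag   = tag;
            line->valid = true;
        }
        return line->data[offset];   /* hit (or just-filled line) */
    }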
DIRECT MEMORY ACCESS (DMA) • The DMA function can be performed by a separate module on the system bus, or it can be incorporated into an I/O module. • In either case, the technique works as follows. When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending it the following information: – Whether a read or write is requested – The address of the I/O device involved – The starting location in memory to read data from or write data to – The number of words to be read or written • The processor then continues with other work. It has delegated this I/O operation to the DMA module, and that module will take care of it. The DMA module transfers the entire block of data, one word at a time, directly to or from memory without going through the processor.
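The four items sent to the DMA module map naturally onto a small command structure. The sketch below is hypothetical: the register layout and the dma_start routine are invented for illustration and do not describe any real controller.

    /* Hypothetical command block a driver might hand to a DMA controller,
     * mirroring the four items listed above. */
    #include <stddef.h>
    #include <stdint.h>

    enum dma_direction { DMA_READ, DMA_WRITE };   /* read from or write to the device */

    struct dma_command {
        enum dma_direction dir;   /* whether a read or write is requested */
        uint32_t device_addr;     /* address of the I/O device involved */
        uint32_t memory_addr;     /* starting location in memory */
        size_t   word_count;      /* number of words to transfer */
    };

    /* Hypothetical driver routine: program the controller and return at once;
     * the DMA module raises an interrupt when the whole block has been moved. */
    void dma_start(volatile struct dma_command *dma_regs,
                   const struct dma_command *cmd) {
        *dma_regs = *cmd;         /* writing the registers kicks off the transfer */
        /* The CPU is now free to continue with other work. */
    }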
• Types of Direct Memory Access (DMA) • There are four popular types of DMA. • Single-Ended DMA • Dual-Ended DMA • Arbitrated-Ended DMA • Interleaved DMA
• Single-Ended DMA: Single-ended DMA controllers operate by reading and writing from a single memory address. They are the simplest type of DMA. • Dual-Ended DMA: Dual-ended DMA controllers can read and write from two memory addresses. Dual-ended DMA is more advanced than single-ended DMA. • Arbitrated-Ended DMA: Arbitrated-ended DMA works by reading and writing to several memory addresses. It is more advanced than dual-ended DMA. • Interleaved DMA: Interleaved DMA controllers read from one memory address and write to another memory address.
• Advantages of DMA Controller • Direct Memory Access speeds up memory operations and data transfer. • The CPU is not involved while data is being transferred. • DMA requires very few clock cycles to transfer data. • DMA distributes the workload appropriately. • DMA helps decrease the load on the CPU.
• Disadvantages of DMA Controller • Direct Memory Access is a costly operation because of additional operations. • DMA suffers from Cache-Coherence Problems. • DMA Controller increases the overall cost of the system. • DMA Controller increases the complexity of the software.
MULTIPROCESSOR AND MULTICORE ORGANIZATION • Symmetric Multiprocessors • There are two or more similar processors of comparable capability. • These processors share the same main memory and I/O facilities and are interconnected by a bus or other internal connection scheme, such that memory access time is approximately the same for each processor. • All processors share access to I/O devices, either through the same channels or through different channels that provide paths to the same device. • All processors can perform the same functions (hence the term symmetric). • The system is controlled by an integrated operating system that provides interaction between processors and their programs at the job, task, file, and data element levels.
MULTIPROCESSOR AND MULTICORE ORGANIZATION • An SMP organization has a number of potential advantages over a uniprocessor organization • Performance: If the work to be done by a computer can be organized so that some portions of the work can be done in parallel, then a system with multiple processors will yield greater performance than one with a single processor • Availability: In a symmetric multiprocessor, because all processors can perform the same functions, the failure of a single processor does not halt the machine. Instead, the system can continue to function at reduced performance • Incremental growth: A user can enhance the performance of a system by adding an additional processor. • Scaling: Vendors can offer a range of products with different price and performance characteristics based on the number of processors configured in the system.
MULTIPROCESSOR AND MULTICORE ORGANIZATION
Multicore Computers • A multicore computer, also known as a chip multiprocessor , combines two or more processors (called cores) on a single piece of silicon (called a die). • Typically, each core consists of all of the components of an independent processor, such as registers, ALU, pipeline hardware, and control unit, plus L1 instruction and data caches. • In addition to the multiple cores, contemporary multicore chips also include L2 cache and, in some cases, L3 cache.
Operating System Overview
Operating System Objectives and Functions • Convenience: An OS makes a computer more convenient to use. • Efficiency: An OS allows the computer system resources to be used in an efficient manner. • Ability to evolve: An OS should be constructed in such a way as to permit the effective development, testing, and introduction of new system functions without interfering with service. • If one were to develop an application program as a set of machine instructions that is completely responsible for controlling the computer hardware, the programmer would face an overwhelmingly complex task; to make this manageable, a set of system programs is provided. • Some of these programs are referred to as utilities, or library programs. These implement frequently used functions that assist in program creation, the management of files, and the control of I/O devices.
Operating System Objectives and Functions
Operating System Objectives and Functions • OS typically provides services in the following areas • Program development: The OS provides a variety of facilities and services, such as editors and debuggers, to assist the programmer in creating programs. • Program execution: A number of steps need to be performed to execute a program. Instructions and data must be loaded into main memory, I/O devices and files must be initialized, and other resources must be prepared. The OS handles these scheduling duties for the user. • Access to I/O devices: Each I/O device requires its own peculiar set of instructions or control signals for operation. The OS provides a uniform interface that hides these details so that programmers can access such devices using simple reads and writes.
Operating System Objectives and Functions • Controlled access to files: For file access, the OS must reflect a detailed understanding of not only the nature of the I/O device (disk drive, tape drive) but also the structure of the data contained in the files on the storage medium. • System access: For shared or public systems, the OS controls access to the system as a whole and to specific system resources. The access function must provide protection of resources and data from unauthorized users and must resolve conflicts for resource contention. • Error detection and response: A variety of errors can occur while a computer system is running. These include internal and external hardware errors, such as a memory error or a device failure or malfunction, and various software errors, such as division by zero or an attempt to access a forbidden memory location. • Accounting: A good OS will collect usage statistics for various resources and monitor performance parameters such as response time. • Instruction set architecture (ISA): This interface is the boundary between hardware and software. Note that both application programs and utilities may access the ISA directly.
Operating System Objectives and Functions • Application binary interface (ABI): The ABI defines the system call interface to the operating system and the hardware resources and services available in a system through the user ISA. • Application programming interface (API): The API gives a program access to the hardware resources and services available in a system through the user ISA.
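On a POSIX system the distinction between the API level and the system-call interface can be seen in a few lines of C: the first output line goes through the C library API (which buffers and then calls write() on the program's behalf), while the second invokes the write() system-call wrapper directly.

    /* Sketch contrasting the API level with the system-call level:
     * both paths end up in the same write() system call. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const char via_api[]     = "hello via the C library API\n";
        const char via_syscall[] = "hello via a system call\n";

        fputs(via_api, stdout);       /* library API call */
        fflush(stdout);               /* library buffers, then calls write() */

        write(STDOUT_FILENO, via_syscall, sizeof via_syscall - 1);  /* direct wrapper */
        return 0;
    }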
THE EVOLUTION OF OPERATING SYSTEMS • Serial Processing:With the earliest computers, from the late 1940s to the mid-1950s, the programmer interacted directly with the computer hardware; there was no OS. • These early systems presented two main problems: • Scheduling: Most installations used a hardcopy sign-up sheet to reserve computer time. Typically, a user could sign up for a block of time in multiples of a half hour or so. A user might sign up for an hour and finish in 45 minutes; this would result in wasted computer processing time. On the other hand, the user might run into problems, not finish in the allotted time, and be forced to stop before resolving the problem. • Setup time: A single program, called a job , could involve loading the compiler plus the high-level language program (source program) into memory, saving the compiled program (object program) and then loading and linking together the object program and common functions. Each of these steps could involve mounting or dismounting tapes or setting up card decks. If an error occurred, the hapless user typically had to go back to the beginning of the setup sequence.
Multi-programmed Batch Systems • With a single program in memory, referred to as uniprogramming, the processor spends a certain amount of time executing until it reaches an I/O instruction. It must then wait until that I/O instruction concludes before proceeding. • When one job needs to wait for I/O, the processor can switch to another job, which is likely not waiting for I/O. Furthermore, we might expand memory to hold three, four, or more programs and switch among all of them. This approach is known as multiprogramming, or multitasking.
Multi-programmed Batch Systems
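A commonly used back-of-the-envelope estimate of the benefit of multiprogramming (not taken from this text, and valid only under the simplifying assumption that the jobs' I/O waits are independent) is CPU utilization ~ 1 - p^n, where p is the fraction of time a job waits for I/O and n is the number of jobs in memory. The short C program below tabulates it for p = 0.8.

    /* Back-of-the-envelope CPU utilization under multiprogramming,
     * using the common approximation 1 - p^n (independent I/O waits). */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double p = 0.80;    /* each job waits for I/O 80% of the time */
        for (int n = 1; n <= 5; n++)
            printf("n = %d jobs -> utilization ~ %.0f%%\n",
                   n, 100.0 * (1.0 - pow(p, n)));
        /* prints roughly 20%, 36%, 49%, 59%, 67% */
        return 0;
    }

Even with heavily I/O-bound jobs, keeping a handful of them in memory lets the processor stay busy most of the time instead of idling during each job's I/O waits.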
Time-Sharing Systems • Just as multiprogramming allows the processor to handle multiple batch jobs at a time, multiprogramming can also be used to handle multiple interactive jobs. • In this latter case, the technique is referred to as time sharing, because processor time is shared among multiple users. • In a time-sharing system, multiple users simultaneously access the system through terminals. • If there are n users actively requesting service at one time, each user will only see on average 1/n of the effective computer capacity. • In early time-sharing systems, a system clock generated interrupts at a rate of approximately one every 0.2 seconds. At each clock interrupt, the OS regained control and could assign the processor to another user. This technique is known as time slicing.
THE EVOLUTION OF OPERATING SYSTEMS • MAJOR ACHIEVEMENTS: Process related • Improper synchronization: It is often the case that a routine must be suspended awaiting an event elsewhere in the system. • Failed mutual exclusion: It is often the case that more than one user or program will attempt to make use of a shared resource at the same time. • Nondeterminate program operation: The results of a particular program normally should depend only on the input to that program and not on the activities of other programs in a shared system. • Deadlocks: It is possible for two or more programs to be hung up waiting for each other. For example, two programs may each require two I/O devices to perform some operation
THE EVOLUTION OF OPERATING SYSTEMS • MAJOR ACHIEVEMENTS: - Memory Management • Process isolation: The OS must prevent independent processes from interfering with each other’s memory, both data and instructions. • Automatic allocation and management: Programs should be dynamically allocated across the memory hierarchy as required. Allocation should be transparent to the programmer. • Support of modular programming: Programmers should be able to define program modules, and to create, destroy, and alter the size of modules dynamically. • Protection and access control: Sharing of memory, at any level of the memory hierarchy, creates the potential for one program to address the memory space of another. • Long-term storage: Many application programs require means for storing information for extended periods of time, after the computer has been powered down.
THE EVOLUTION OF OPERATING SYSTEMS • Information Protection and Security: • Availability: Concerned with protecting the system against interruption. • Confidentiality: Assures that users cannot read data for which access is unauthorized. • Data integrity: Protection of data from unauthorized modification. • Authenticity: Concerned with the proper verification of the identity of users and the validity of messages or data.
DEVELOPMENTS LEADING TO MODERN OPERATING SYSTEMS • Microkernel architecture: assigns only a few essential functions to the kernel, including address spaces, inter-process communication (IPC), and basic scheduling. Other OS services are provided by processes, sometimes called servers, that run outside the kernel. • Multithreading: • Thread: A thread executes sequentially and is interruptible so that the processor can turn to another thread. • Process: A collection of one or more threads and associated system resources.
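A minimal POSIX-threads sketch of these definitions: one process, two threads sharing its resources, each thread executing sequentially and scheduled independently by the OS.

    /* One process containing two threads, using POSIX threads. */
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) {
        int id = *(int *)arg;
        printf("thread %d running inside the same process\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        int ids[2] = {1, 2};

        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, worker, &ids[i]);  /* threads share the process's resources */
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);                      /* the process = threads + shared resources */
        return 0;
    }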
DEVELOPMENTS LEADING TO MODERN OPERATING SYSTEMS • Symmetric multiprocessing (SMP): • Performance • Availability • Incremental growth • Scaling
MICROSOFT WINDOWS OVERVIEW • Windows Objects: • Encapsulation: An object consists of one or more items of data, called attributes, and one or more procedures that may be performed on those data, called services. • Object class and instance: An object class is a template that lists the attributes and services of an object and defines certain object characteristics. • Inheritance: Although the implementation is hand coded, the Executive uses inheritance to extend object classes by adding new features. • Polymorphism: Internally, Windows uses a common set of API functions to manipulate objects of any type; this is a feature of polymorphism
New in Windows • Engineering improvements: The performance of hundreds of key scenarios, such as opening a file from the GUI, is tracked and continuously characterized to identify and fix problems. • Performance improvements: The amount of memory required has been reduced, both for clients and for servers. • Reliability improvements: The user-mode heap is more tolerant of memory allocation errors by programmers, such as continuing to use memory after it is freed. • Energy efficiency: Many improvements have been made to the energy efficiency of Windows. On servers, unused processors can be “parked,” reducing their energy use. • Security: The security features in Windows are updated in every version.
LINUX • Linux started out as a UNIX variant for the IBM PC (Intel 80386) architecture. • Key to the success of Linux has been the availability of free software packages under the auspices of the Free Software Foundation (FSF). FSF’s goal is stable, platform-independent software that is free, high quality, and embraced by the user community. • Dynamic linking: A kernel module can be loaded and linked into the kernel while the kernel is already in memory and executing. A module can also be unlinked and removed from memory at any time. • Stackable modules: The modules are arranged in a hierarchy. Individual modules serve as libraries when they are referenced by client modules higher up in the hierarchy, and as clients when they reference modules further down
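Dynamic linking of kernel modules can be made concrete with the standard skeleton of a loadable Linux module, shown below; it can be inserted with insmod and removed with rmmod while the kernel keeps running. The build machinery (a Makefile that invokes the kernel build system) is omitted here.

    /* Skeleton of a loadable Linux kernel module. */
    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal example module");

    static int __init hello_init(void) {
        pr_info("example module loaded\n");    /* runs when the module is linked into the kernel */
        return 0;
    }

    static void __exit hello_exit(void) {
        pr_info("example module removed\n");   /* runs when the module is unlinked */
    }

    module_init(hello_init);
    module_exit(hello_exit);

Loading and unloading typically look like "sudo insmod hello.ko" followed by "sudo rmmod hello", with the messages visible via dmesg.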
Operating-System Structure • An operating system provides an environment for the execution of programs. It provides certain services to programs and to the users of those programs.
System Calls • System calls provide an interface to the services made available by an operating system. Frequently, systems execute thousands of calls per second. • The API specifies a set of functions that are available to an application programmer, including the parameters that are passed to each function and the return values the programmer can expect. • Three of the most common APIs available to application programmers are the Windows API for Windows systems, the POSIX API for POSIX-based systems (which include virtually all versions of UNIX, Linux, and Mac OS X), and the Java API for programs that run on the Java virtual machine.
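As a concrete illustration of a program using the POSIX API, the sketch below copies one file to another with the classic open/read/write/close sequence; error handling is kept to a bare minimum.

    /* Copy a file using POSIX system calls: open, read, write, close. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <source> <destination>\n", argv[0]);
            return 1;
        }

        int in  = open(argv[1], O_RDONLY);
        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) { perror("open"); return 1; }

        char buf[4096];
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)   /* each iteration: two system calls */
            write(out, buf, (size_t)n);

        close(in);
        close(out);
        return 0;
    }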
Types of System Calls • System calls can be grouped roughly into six major categories: process control, file manipulation, device manipulation, information maintenance, communications, and protection.
Types of System Calls
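The process-control category can be illustrated with the familiar POSIX fork/exec/wait pattern; the program run by the child (ls -l) is just an arbitrary choice for the example.

    /* Process-control system calls: create a process, run a program in it,
     * and wait for it to finish. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                           /* process control: create a process */

        if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);   /* load and execute a new program */
            perror("execlp");                         /* reached only if exec fails */
            return 1;
        } else if (pid > 0) {
            int status;
            waitpid(pid, &status, 0);                 /* process control: wait for the child */
            printf("child finished with status %d\n", WEXITSTATUS(status));
        } else {
            perror("fork");
        }
        return 0;
    }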
System Programs • System programs, also known as system utilities, provide a convenient environment for program development and execution. • File management. These programs create, delete, copy, rename, print, dump, list, and generally manipulate files and directories. • Status information. Some programs simply ask the system for the date, time, amount of available memory or disk space, number of users, or similar status information. • File modification. Several text editors may be available to create and modify the content of files stored on disk or other storage devices. There may also be special commands to search contents of files or perform transformations of the text. • Programming-language support. Compilers, assemblers, debuggers, and interpreters for common programming languages (such as C, C++, Java, and PERL) are often provided with the operating system or available as a separate download.
System Programs • Program loading and execution. Once a program is assembled or compiled, it must be loaded into memory to be executed. The system may provide absolute loaders, relocatable loaders, linkage editors, and overlay loaders. • Communications. These programs provide the mechanism for creating virtual connections among processes, users, and computer systems. They allow users to send messages to one another’s screens, to browse Web pages, to send e-mail messages, to log in remotely, or to transfer files from one machine to another. • Background services. All general-purpose systems have methods for launching certain system-program processes at boot time. Some of these processes terminate after completing their tasks, while others continue to run until the system is halted. Constantly running system-program processes are known as services, subsystems, or daemons.
• Cold/Hard Booting: When the computer is started from a powered-off state by switching on the power button, the process is called cold booting. During cold booting, the system reads all the instructions from the ROM (BIOS) and the operating system is automatically loaded into the system. Cold booting takes more time than warm (hot) booting. • Warm/Soft Booting: When the computer system stops responding or hangs and is restarted while still powered on, the process is called warm or hot booting; it is also referred to as rebooting. There are many possible reasons for such a state, and the only solution is to reboot the computer.
System Booting process 1. Startup: Turning the computer on 2. BIOS: The BIOS performs a power-on self-test (POST) of the hardware components 3. Boot loader: The boot loader loads the kernel, which is the core code of the operating system 4. Operating system: The operating system is loaded into the main memory 5. System configuration: Drivers and settings are loaded 6. System utilities: System utilities are loaded 7. User authentication: The user authenticates before being able to use the computer
System Boot • The procedure of starting a computer by loading the kernel is known as booting the system. On most computer systems, a small piece of code known as the bootstrap program or bootstrap loader locates the kernel, loads it into main memory, and starts its execution. • This program is stored in read-only memory (ROM), because the RAM is in an unknown state at system startup. ROM is convenient because it needs no initialization and cannot easily be infected by a computer virus. • A problem with this approach is that changing the bootstrap code requires changing the ROM hardware chips. Some systems resolve this problem by using erasable programmable read-only memory (EPROM), which is read-only except when explicitly given a command to become writable. • The full disk-based bootstrap program, and the operating system itself, can be easily changed by writing new versions to disk. A disk that has a boot partition is called a boot disk or system disk.
Booting process overview