Dedication
I dedicate all my efforts to my readers, who give me the urge and inspiration to work more.
Muhammad Sharif, Author
CHAPTER 1 INTRODUCTION TO DATABASE AND DATABASE MANAGEMENT SYSTEM
CHAPTER 2 DATA TYPES, DATABASE KEYS, SQL FUNCTIONS AND OPERATORS
CHAPTER 3 DATA MODELS AND MAPPING TECHNIQUES
CHAPTER 4 DISCOVERING BUSINESS RULES AND DATABASE CONSTRAINTS
CHAPTER 5 DATABASE DESIGN STEPS AND IMPLEMENTATIONS
CHAPTER 6 DATABASE NORMALIZATION AND DATABASE JOINS
CHAPTER 7 FUNCTIONAL DEPENDENCIES IN THE DATABASE MANAGEMENT SYSTEM
CHAPTER 8 DATABASE TRANSACTION, SCHEDULES, AND DEADLOCKS
CHAPTER 9 RELATIONAL ALGEBRA AND QUERY PROCESSING
CHAPTER 10 FILE STRUCTURES, INDEXING, AND HASHING
CHAPTER 11 DATABASE USERS AND DATABASE SECURITY MANAGEMENT
CHAPTER 12 BUSINESS INTELLIGENCE TERMINOLOGIES IN DATABASE SYSTEMS
CHAPTER 13 DBMS INTEGRATION WITH BPMS
CHAPTER 14 RAID STRUCTURE AND MEMORY MANAGEMENT
CHAPTER 15 ORACLE DATABASE FUNDAMENTAL AND ITS ADMINISTRATION
CHAPTER 16 DATABASE BACKUPS AND RECOVERY, LOGS MANAGEMENT
CHAPTER 17 PREREQUISITES OF STORAGE MANAGEMENT AND ORACLE INSTALLATION
CHAPTER 18 ORACLE DATABASE APPLICATIONS DEVELOPMENT USING ORACLE APPLICATION EXPRESS
CHAPTER 19 ORACLE WEBLOGIC SERVERS AND ITS CONFIGURATIONS
Acknowledgments
We are grateful to the numerous individuals who contributed to the preparation of this handbook on relational database systems and management, 2nd edition. First, we wish to thank our reviewers for their detailed suggestions and insights, characteristic of their thoughtful teaching style. All glory, praise, and gratitude to Almighty Allah, who blessed us with a superb and unequaled Professor as 'Brain'.
CHAPTER 1 INTRODUCTION TO DATABASE AND DATABASE MANAGEMENT SYSTEM
What is Data?
Data: the world's most valuable resource. Data are the raw bits and pieces of information with no context. If I told you, "15, 23, 14, 85," you would not have learned anything, but I would have given you data. Data are facts that can be recorded and that have explicit meaning.
Classification of Data
We can classify data as structured, unstructured, or semi-structured.
1. Structured data is generally quantitative; it usually consists of hard numbers or things that can be counted.
2. Unstructured data is generally qualitative; it cannot be analyzed and processed using conventional tools and methods.
3. Semi-structured data is data that is not captured or formatted in conventional ways. It does not follow the format of a tabular data model or relational database because it does not have a fixed schema. XML and JSON are examples of semi-structured data.
Properties
Structured data is generally stored in data warehouses, while unstructured data is stored in data lakes. Structured data requires less storage space, while unstructured data requires more.
Examples: structured data (tables, tabular formats such as .csv, Excel spreadsheets); unstructured data (email, weather data); semi-structured data (web pages, resume documents, XML).
Categories of Data
Implicit data is information that is not provided intentionally but is gathered from available data streams, either directly or through analysis of explicit data.
Explicit data is information that is provided intentionally, for example through surveys and membership registration forms. Explicit data is taken at face value rather than analyzed or interpreted.
Data hacking
A data breach is a cyberattack in which sensitive, confidential, or otherwise protected data is accessed or disclosed.
What is a data item? The basic component of a file in a file system is a data item.
What are records? A group of related data items treated as a single unit by an application is called a record.
What is a file? A file is a collection of records of a single type. A simple file processing system refers to the first computer-based approach to handling commercial or business applications.
Mapping from a file system to a relational database: in a relational database, a data item is called a column or attribute; a record is called a row or tuple; and a file is called a table.
Major challenges in moving from file systems to databases:
1. Data validation
2. Data integrity
3. Data security
4. Data sharing
Details are covered later where needed.
What is information? When we organize data so that it has some meaning, we call it information.
What is a database?
What is a Database Application?
A database application is a program or group of programs used to perform certain operations on the data stored in the database. These operations may include inserting data into the database, extracting data from the database based on a certain condition, and updating data in the database. Examples: GIS/GPS applications.
What is Knowledge? Knowledge = information + application.
What is Metadata? The database definition or descriptive information is also stored by the DBMS in the form of a database catalog or dictionary; this is called metadata. Metadata are data that describe the properties or characteristics of end-user data and the context of those data: information about the structure of the database. Example: metadata for a relation Class Roster could be kept in a catalog relation Attr_Cat(attr_name, rel_name, type, position), recording each attribute's name, relation, type, and position (1, 2, 3, ...) within the relation, along with access rights on objects. The simple definition: metadata is data about data.
What is a Shared Collection? The logical relationship between data; data inter-linked with other data is called a shared collection. It means the data is in a repository and we can access it.
What is a Database Management System (DBMS)? A database management system (DBMS) is a software package designed to define, retrieve, control, manipulate, and manage data in a database.
What are database systems? A shared collection of logically related data (comprising entities, attributes, and relationships), designed to meet the information needs of an organization. The database and the DBMS software together are called a database system.
Components of a Database Environment
1. Hardware (server)
2. Software (DBMS)
3. Data and metadata
4. Procedures (governing the design and use of the database)
5. People (those who administer the database)
History of Databases
Between 1970 and 1972, E.F. Codd published papers proposing the relational database model. RDBMSs are based on Codd's relational model. Before the DBMS, there were file-based systems, in the era of the 1950s.
Evolution of Database Systems
 Flat files – 1960s - 1980s
 Hierarchical – 1970s - 1990s
 Network – 1970s - 1990s
 Relational – 1980s - present
 Object-oriented – 1990s - present
 Object-relational – 1990s - present
 Data warehousing – 1980s - present
 Web-enabled – 1990s - present
Here are the important landmarks in the evolution of database systems:
 1960 – Charles Bachman designed the first DBMS system
 1970 – E.F. Codd proposed the relational model (IBM's Information Management System, IMS, a hierarchical DBMS, was already in use)
 1976 – Peter Chen coined and defined the Entity-Relationship model, also known as the ER model
 1980 – the relational model becomes a widely accepted database component
 1985 – object-oriented DBMSs develop
 1990 – incorporation of object orientation into relational DBMSs
 1991 – Microsoft ships MS Access, a personal DBMS that displaces other personal DBMS products
 1995 – first Internet database applications
 1997 – XML applied to database processing; many vendors begin to integrate XML into DBMS products
The ANSI-SPARC Database Systems Architecture levels
1. The Internal Level (physical representation of data)
2. The Conceptual Level (holistic representation of data)
3. The External Level (user representation of data)
The internal level stores data physically. The conceptual level describes how the database is structured logically. The external level gives different users different views of the data; it is the uppermost level in the database.
Database architecture tiers
Database architecture has four types of tiers.
Single-tier architecture (for local applications communicating directly with the database server/disk; also called physically centralized architecture).
2-tier architecture (basic client-server; APIs like ODBC, JDBC, and ORDS are used, and the client and the database server communicate through these APIs over a network).
3-tier architecture (used for web applications; a web server sits between the client and the database server).
Advantages of the ANSI-SPARC Architecture
The ANSI-SPARC standard architecture is three-tiered, but some books refer to four tiers. This tiered representation offers several advantages:
Its main objective is to provide data abstraction.
The same data can be accessed by different users with different customized views.
The user is not concerned with the physical data storage details.
The physical storage structure can be changed without requiring changes to the internal structure of the database or to users' views.
The conceptual structure of the database can be changed without affecting end users.
It makes the database abstract: it hides the details of how the data is stored physically in an electronic system, which makes the database easier to understand and easier to use for an average user, and it allows the user to concentrate on the data rather than worrying about how it is stored.
Types of databases
There are various types of databases used for storing different varieties of data in their respective DBMS data model environments. Each type of database has its own data model, with the exception of NoSQL systems, which use flexible, non-fixed schemas. One type, the enterprise database management system, is not included in this figure. Details are given one by one where appropriate; the sequence of the details is not significant.
Parallel database architectures
Parallel database architectures are:
1. Shared-memory
2. Shared-disk
3. Shared-nothing (the most common one)
4. Shared-everything architecture
5. Hybrid system
6. Non-Uniform Memory Architecture
A hierarchical system is a hybrid of the shared-memory, shared-disk, and shared-nothing systems. The hierarchical model is also known as Non-Uniform Memory Architecture (NUMA). NUMA uses local and remote memory (memory from another group); hence, processors take longer to communicate with each other. In NUMA, different memory controllers are used.
UMA vs. NUMA: Uniform Memory Access uses three types of buses (single, multiple, and crossbar), while Non-Uniform Memory Access uses two types of buses (tree and hierarchical).
Advantages of NUMA
It improves the scalability of the system.
The memory bottleneck (shortage of memory) problem is minimized in this architecture.
NUMA machines provide a linear address space, allowing all processors to directly address all memory.
Distributed Databases
Distributed database system (DDBS) = database systems + communication.
A distributed database is a set of databases in a distributed system that can appear to applications as a single data source. A distributed DBMS (DDBMS) can have the actual database and DBMS software distributed over many sites connected by a computer network.
Distributed DBMS architectures
Three alternative approaches are used to separate functionality across different DBMS-related processes. These alternative distributed architectures are:
1. Client-server
2. Collaborating-server or multi-server
3. Middleware or peer-to-peer
 Client-server: A client can send a query to a server to execute. There may be multiple server processes. The two client-server architecture models are:
1. Single server, multiple clients
2. Multiple servers, multiple clients
Client-server architecture layers
1. Presentation layer
2. Logic layer
3. Data layer
Presentation layer: The basic work of this layer is to provide a user interface, usually a graphical user interface consisting of menus, buttons, icons, etc. The presentation tier presents information related to such work as browsing, sales, purchasing, and shopping-cart contents, and it communicates with the other tiers by sending results to the browser/client tier and the other tiers in the network. Its other name is the external layer.
Logic layer: The logic tier is also known as the data access tier or middle tier. It lies between the presentation tier and the data tier and controls the application's functions by performing processing. The components that build this layer exist on the server and assist in resource sharing; these components also define the business rules (different government legal rules, data rules, and business algorithms designed to keep the data structure consistent). This is also known as the conceptual layer.
Data layer: The data layer is the physical database tier where data is stored and manipulated. It is the internal layer of the database management system, where the data is stored.
 Collaborative/multi-server: This is an integrated database system formed by a collection of two or more autonomous database systems. A multi-DBMS can be expressed through six levels of schemas:
1. Multi-database view level − depicts multiple user views comprising subsets of the integrated distributed database.
2. Multi-database conceptual level − depicts the integrated multi-database, comprising global logical multi-database structure definitions.
3. Multi-database internal level − depicts the data distribution across different sites and the multi-database-to-local-data mapping.
4. Local database view level − depicts a public view of local data.
5. Local database conceptual level − depicts the local data organization at each site.
6. Local database internal level − depicts the physical data organization at each site.
There are two design alternatives for a multi-DBMS:
1. A model with a multi-database conceptual level.
2. A model without a multi-database conceptual level.
 Peer-to-peer: an architecture model for DDBMSs in which each peer acts both as a client and as a server for imparting database services. The peers share their resources with other peers and coordinate their activities. Such systems scale flexibly, growing and shrinking as needed. All nodes have the same role and functionality. They are harder to manage because all machines are autonomous and loosely coupled.
This architecture generally has four levels of schemas:
1. Global conceptual schema − depicts the global logical view of data.
2. Local conceptual schema − depicts the logical data organization at each site.
3. Local internal schema − depicts the physical data organization at each site.
4. Local external schema − depicts the user view of data.
Example of peer-to-peer architecture
Types of homogeneous distributed databases
Autonomous − each database is independent and functions on its own. The databases are integrated by a controlling application and use message passing to share data updates.
Non-autonomous − data is distributed across the homogeneous nodes, and a central or master DBMS coordinates data updates across the sites.
Autonomous databases
1. Autonomous Transaction Processing – Serverless
2. Autonomous Transaction Processing – Dedicated
3. Autonomous Data Warehouse processing – Analytics
Serverless is a simple and elastic deployment choice: Oracle autonomously operates all aspects of the database lifecycle, from database placement to backup and updates. Dedicated is a private-cloud-in-public-cloud deployment choice: completely dedicated compute, storage, network, and database service for a single tenant.
Autonomous transaction processing: architecture
Heterogeneous distributed databases (a dissimilar schema at each site; each site's database can be any variety of DBMS: relational, network, hierarchical, or object-oriented).
Types of heterogeneous distributed databases
1. Federated − the heterogeneous database systems are independent and integrated so that they function as a single database system.
2. Un-federated − the database systems employ a central coordinating module.
In a heterogeneous distributed database, different sites have different operating systems, DBMS products, and data models.
Parameters along which distributed DBMS architectures are developed
DDBMS architectures are generally developed along three parameters:
1. Distribution − the physical distribution of data across the different sites.
2. Autonomy − the distribution of control of the database system and the degree to which each constituent DBMS can operate independently.
3. Heterogeneity − the uniformity or dissimilarity of the data models, system components, and databases.
Note: Semi-join and Bloom join are two techniques/data-fetching methods in distributed databases.
Some popular databases and their respective data models
 Native XML databases: We were not surprised that a number of start-up companies, as well as some established data management companies, determined that XML data would be best managed by a DBMS designed specifically to deal with semi-structured data, that is, a native XML database.
 Conceptual database: This step is related to modeling in the Entity-Relationship (E/R) model to specify sets of data called entities, relations among them called relationships, and cardinality restrictions identified by the letters N and M; in this case, the many-to-many relationships stand out.
 Conventional database: This step includes relational modeling, where a mapping from the E/R model to relations is carried out using mapping rules. The subsequent implementation is done in Structured Query Language (SQL).
 Non-conventional database: This step involves object-relational modeling, specified in Structured Query Language. In this case, the modeling relates the objects and their relationships to the relational model.
 Traditional database
 Temporal database
 Conventional databases
 NewSQL database
 Autonomous database
 Cloud database
 Spatiotemporal database
 Enterprise database management system
 Google Cloud Firestore
 Couchbase
 Memcached, Coherence (key-value stores)
 HBase, Bigtable, Accumulo (tabular)
 MongoDB, CouchDB, Cloudant (JSON-like, document-based)
 Neo4j (graph database)
 Redis (data model: key-value)
 Elasticsearch (data model: search engine)
 Microsoft Access (data model: relational)
 Cassandra (data model: wide column)
 MariaDB (data model: relational)
 Splunk (data model: search engine)
 Snowflake (data model: relational)
 Azure SQL Database (relational)
 Amazon DynamoDB (data model: multi-model)
 Hive (data model: relational)
Non-relational (NoSQL) data model
BASE model:
Basically Available – rather than enforcing immediate consistency, BASE-modeled NoSQL databases ensure the availability of data by spreading and replicating it across the nodes of the database cluster.
Soft State – due to the lack of immediate consistency, data values may change over time. The BASE model breaks with the concept of a database that enforces its own consistency, delegating that responsibility to developers.
Eventually Consistent – the fact that BASE does not enforce immediate consistency does not mean that it never achieves it; however, until it does, data reads are still possible (even though they might not reflect reality).
Just as SQL databases are almost uniformly ACID-compliant, NoSQL databases tend to conform to BASE principles.
NewSQL databases
NewSQL is a class of relational database management systems that seek to provide the scalability of NoSQL systems for online transaction processing (OLTP) workloads while maintaining the ACID guarantees of a traditional database system.
Examples and properties of relational and non-relational databases: the term NewSQL categorizes databases that combine the relational model with advances in scalability and flexibility regarding types of data. These databases focus on features that are not present in NoSQL, offering a strong consistency guarantee. This covers two layers of data: a relational one and a key-value store.
Sr. No | NoSQL | NewSQL
1. | NoSQL is schema-less or has no fixed schema (unstructured schema), so the BASE data model applies; NoSQL is a schema-free database. | NewSQL is a schema-fixed as well as a schema-free database.
2. | It is horizontally scalable. | It is horizontally scalable.
3. | It possesses automatic high availability. | It possesses built-in high availability.
4. | It supports cloud, on-disk, and cache storage. | It fully supports cloud, on-disk, and cache storage; it may have problems with in-memory architecture for very large volumes of data.
5. | It promotes CAP properties. | It promotes ACID properties.
6. | Online transaction processing is not supported. | Online transaction processing and compatibility with traditional relational databases are fully supported.
7. | There are low security concerns. | There are moderate security concerns.
8. | Use cases: big data, social network applications, and IoT. | Use cases: e-commerce, the telecom industry, and gaming.
9. | Examples: DynamoDB, MongoDB, RavenDB, etc. | Examples: VoltDB, CockroachDB, NuoDB, etc.
Advantages of database management systems:
A DBMS supports a logical view (schema, subschema) and a physical view (access methods, data clustering); it supports a data definition language and a data manipulation language; and it provides important utilities such as transaction management and concurrency control, data integrity, crash recovery, and security. Relational database systems, the dominant type of system for well-formatted business databases, also provide a greater degree of data independence. The motivations for using databases rather than files include greater availability to a diverse set of users, integration of data for easier access to and updating of complex transactions, less redundancy of data, data consistency, and better data security.
CHAPTER 2 DATA TYPES, DATABASE KEYS, SQL FUNCTIONS AND OPERATORS
Data types overview
BINARY_FLOAT: a 32-bit floating-point number; this data type requires 4 bytes. BINARY_DOUBLE: a 64-bit floating-point number; this data type requires 8 bytes.
There are two classes of date- and time-related data types in PL/SQL:
1. Datetime data types
2. Interval data types
The datetime data types are:
 Date
 Timestamp
 Timestamp with time zone
 Timestamp with local time zone
The interval data types are:
 Interval year to month
 Interval day to second
If MAX_STRING_SIZE = EXTENDED, the limit is 32,767 bytes or characters; if MAX_STRING_SIZE = STANDARD, it is 4,000 bytes or characters.
NUMBER(p,s) data type: a number having precision p and scale s. The precision p can range from 1 to 38; the scale s can range from -84 to 127. Both precision and scale are in decimal digits. A NUMBER value requires from 1 to 22 bytes.
Character data types
The character data types represent alphanumeric text. PL/SQL uses the SQL character data types such as CHAR, VARCHAR2, LONG, RAW, LONG RAW, ROWID, and UROWID. CHAR(n) is a fixed-length character type whose length can be from 1 to 32,767 bytes. VARCHAR2(n) is variable-length character data from 1 to 32,767 bytes.
Data Type | Maximum Size in PL/SQL | Maximum Size in SQL
CHAR | 32,767 bytes | 2,000 bytes
NCHAR | 32,767 bytes | 2,000 bytes
RAW | 32,767 bytes | 2,000 bytes
VARCHAR2 | 32,767 bytes | 4,000 bytes (1 char = 1 byte)
NVARCHAR2 | 32,767 bytes | 4,000 bytes
LONG | 32,760 bytes | 2 gigabytes (GB) - 1
LONG RAW | 32,760 bytes | 2 GB
BLOB | 8-128 terabytes (TB) | (4 GB - 1) * database_block_size
CLOB | 8-128 TB (used to store large blocks of character data in the database) | (4 GB - 1) * database_block_size
NCLOB | 8-128 TB (used to store large blocks of NCHAR data in the database) | (4 GB - 1) * database_block_size
Scalar (no fixed range): single values with no internal components, such as a NUMBER, DATE, or BOOLEAN; numeric values on which arithmetic operations are performed, like NUMBER(7,2); dates stored in the Julian date format; logical values on which logical operations are performed.
NUMBER data type (no fixed range); ANSI equivalents: DEC, DECIMAL, DOUBLE PRECISION, FLOAT, INTEGER, INT, NUMERIC, REAL, SMALLINT.
Type | Size in Memory | Range of Values
Byte | 1 byte | 0 to 255
Boolean | 2 bytes | True or False
Integer | 2 bytes | -32,768 to 32,767
Long (long integer) | 4 bytes | -2,147,483,648 to 2,147,483,647
Single (single-precision real) | 4 bytes | approximately -3.4E38 to 3.4E38
Double (double-precision real) | 8 bytes | approximately -1.8E308 to 4.9E324
Currency (scaled integer) | 8 bytes | approximately -922,337,203,685,477.5808 to 922,337,203,685,477.5807
Date | 8 bytes | 1/1/100 to 12/31/9999
Object | 4 bytes | any object reference
String | variable length: 10 bytes + string length; fixed length: string length | variable length: up to about 2 billion characters (65,400 for Win 3.1); fixed length: up to 65,400
Variant | 16 bytes for numbers; 22 bytes + string length for strings |
The Concept of Signed and Unsigned Integers
Organization of bits in a 16-bit signed short integer: a signed number that stores 16 bits can contain values ranging from -32,768 through 32,767, and one that stores 8 bits can contain values ranging from -128 through 127.
Data types can be further divided into:
 Primitive
 Non-primitive
Primitive data types are pre-defined, whereas non-primitive data types are user-defined. Data types like byte, int, short, float, long, char, and bool are primitive data types. Non-primitive data types include class, enum, array, delegate, etc.
User-defined data types
There are two categories of user-defined data types:
 Object types
 Collection types
A user-defined data type (UDT) is a data type derived from an existing data type. You can use UDTs to extend the built-in types already available and create your own customized data types.
There are six user-defined types:
1. Distinct type
2. Structured type
3. Reference type
4. Array type
5. Row type
6. Cursor type
Here the data types are grouped (SQL Server):
 Exact numeric: bit, tinyint, smallint, int, bigint, numeric, decimal, smallmoney, money
 Approximate numeric: float, real
 Date and time: datetime, smalldatetime, date, time, datetimeoffset, datetime2
 Character strings: char, varchar, text
 Unicode character strings: nchar, nvarchar, ntext
 Binary strings: binary, varbinary, image
 Other data types: sql_variant, timestamp, uniqueidentifier, XML
 CLR data types: hierarchyid
 Spatial data types: geometry, geography
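For illustration, a minimal sketch of a table definition exercising several of the Oracle built-in types discussed above (all table and column names are hypothetical):

-- Hypothetical table using several Oracle built-in data types
CREATE TABLE employee_demo (
  emp_id      NUMBER(10) PRIMARY KEY,    -- exact numeric, precision 10
  full_name   VARCHAR2(100),             -- variable-length string
  grade       CHAR(2),                   -- fixed-length string
  salary      NUMBER(7,2),               -- up to 99999.99
  hired_on    DATE,                      -- date and time to the second
  last_login  TIMESTAMP WITH TIME ZONE,  -- datetime with time zone
  tenure      INTERVAL YEAR TO MONTH,    -- interval type
  photo       BLOB,                      -- large binary object
  resume_text CLOB                       -- large character object
);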
Abstract Data Types in Oracle
One of the shortcomings of the Oracle 7 database was its limited number of intrinsic data types. An Abstract Data Type (ADT) consists of a data structure and subprograms that manipulate the data. The variables that form the data structure are called attributes; the subprograms that manipulate the attributes are called methods. ADTs are stored in the database, and instances of ADTs can be stored in tables and used as PL/SQL variables. ADTs let you reduce complexity by separating a large system into logical components, which you can reuse. ADT definitions can be inspected in the static data dictionary views.
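For illustration, a minimal sketch of defining and using an ADT (all names are hypothetical):

-- Define an object type (ADT) with attributes and a method
CREATE TYPE address_t AS OBJECT (
  street  VARCHAR2(60),
  city    VARCHAR2(40),
  country VARCHAR2(40),
  MEMBER FUNCTION one_line RETURN VARCHAR2
);
/
CREATE TYPE BODY address_t AS
  MEMBER FUNCTION one_line RETURN VARCHAR2 IS
  BEGIN
    RETURN street || ', ' || city || ', ' || country;
  END;
END;
/
-- Instances of the ADT can be stored as a column in a table
CREATE TABLE customer_demo (
  cust_id NUMBER(10) PRIMARY KEY,
  addr    address_t
);
INSERT INTO customer_demo VALUES (1, address_t('12 Main St', 'Lahore', 'Pakistan'));
SELECT c.cust_id, c.addr.one_line() FROM customer_demo c;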
ANSI SQL data type conversions with Oracle data types
Database Keys
A key is a field of a table that identifies a tuple in that table.
 Super key: An attribute or a set of attributes that uniquely identifies a tuple within a relation.
 Candidate key: A super key such that no proper subset of it is a super key within the relation; it contains no unique subset (irreducibility). There may be many candidate keys (specified using UNIQUE), one of which is chosen as the primary key, e.g., PRIMARY KEY (sid), UNIQUE (id, grade). A candidate key is unique, but its value can be changed.
 Natural key: A PK in OLTP; it may also be a PK in OLAP. A natural key (also known as a business key or domain key) is a type of unique key formed of attributes that exist and are used in the external world outside the database, e.g., an SSN column.
 Composite or concatenated key: A primary key that consists of two or more attributes.
 Primary key: The candidate key selected to identify tuples uniquely within the relation. It should remain constant over the life of the tuple: a PK is unique, not repeated, not null, and unchanged for life. If the primary key must be changed, we drop the row and add a new one. In most cases the PK is used elsewhere as a foreign key, so you cannot simply change its value; you must first delete the child rows before you can modify the parent table.
 Minimal super key: All super keys can't be primary keys; the primary key is a minimal super key, that is, a minimized set of columns that can be used to identify a single row.
 Foreign key: An attribute or set of attributes within one relation that matches the candidate key of some (possibly the same) relation. Can you reference a non-primary-key column as a foreign key? Yes; the minimum condition is that the referenced column must be unique, i.e., a candidate key.
 Composite key: A composite key is a combination of two or more columns that uniquely identify rows in a table. The combination of columns guarantees uniqueness, even though the individual columns are not guaranteed to be unique; they are combined to uniquely identify records in a table. You can use a composite key as a PK, but the whole composite key then goes to other tables as a foreign key.
 Alternate key: A relation can have only one primary key, but it may contain many fields or combinations of fields that could serve as the primary key. One field or combination of fields is used as the primary key; the fields or combinations not used as the primary key are known as candidate keys or alternate keys.
 Sort or control key: A field or combination of fields used to physically sequence the stored data; it is also known as the control key.
 Alternate key: An alternate key is a secondary key. A simple example: a student entity can contain NAME, ROLL NO., ID, and CLASS; the candidate keys not chosen as the primary key are alternate keys.
 Unique key: A set of one or more fields/columns of a table that uniquely identifies a record in a database table. It is similar to a primary key, but it can accept one null value and cannot have duplicate values. The unique key and the primary key both guarantee uniqueness for a column or a set of columns. A unique key constraint is automatically defined within a primary key constraint. There may be many unique key constraints on one table, but only one PRIMARY KEY constraint per table.
 Artificial key: A key created using arbitrarily assigned data. Artificial keys are created when the primary key would be large and complex and would have no relationship with many other relations. The data values of artificial keys are usually numbered in serial order. For example, a primary key composed of Emp_ID, Emp_role, and Proj_ID is large in an employee relation, so it would be better to add a new virtual attribute to identify each tuple in the relation uniquely. ROWNUM and ROWID are artificial keys; an artificial key should be a number or integer (numeric).
Format of a ROWID:
 Surrogate key: An artificial key that aims to uniquely identify each record is called a surrogate key. This kind of key is unique because it is created when you don't have a natural primary key. You do not insert values into a surrogate key yourself; its value comes from the system automatically. It carries no business logic, so it does not change with business requirements. Surrogate keys reduce the complexity of composite keys, and they simplify extract, transform, and load (ETL) processing in databases.
 Compound key: A compound key has two or more attributes that allow you to uniquely recognize a specific record. It is possible that each column by itself is not unique within the database.
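For illustration, a minimal sketch tying several of these key types together (all names are hypothetical; the identity column assumes Oracle 12c or later):

-- Parent table: surrogate primary key plus a natural/alternate key
CREATE TABLE student (
  student_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY, -- surrogate key
  ssn        VARCHAR2(11) NOT NULL UNIQUE,                    -- natural key, also an alternate key
  full_name  VARCHAR2(100)
);
-- Child table: composite primary key and a foreign key
CREATE TABLE enrollment (
  student_id NUMBER NOT NULL REFERENCES student (student_id), -- foreign key
  course_id  VARCHAR2(10) NOT NULL,
  grade      CHAR(1),
  PRIMARY KEY (student_id, course_id)                         -- composite key
);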
Database keys and their metadata descriptions:
Operators
The not-equal operator is written <> or !=, e.g., WHERE salary <> 500.
Wildcards and Union Operators
The LIKE operator is used to filter the result set based on a string pattern. It is always used in the WHERE clause. Wildcards are used in SQL to match a string pattern: a wildcard character is used to substitute one or more characters in a string. Wildcard characters are used with the LIKE operator. There are two wildcards often used in conjunction with the LIKE operator:
1. The percent sign (%) represents zero, one, or multiple characters.
2. The underscore sign (_) represents one single character.
Two main differences between LIKE and ILIKE:
1. LIKE is case-sensitive, whereas ILIKE is case-insensitive.
2. LIKE is a standard SQL operator, whereas ILIKE is only implemented in certain databases such as PostgreSQL and Snowflake.
To ignore case when you're matching values, you can use the ILIKE command:
Example 1: SELECT * FROM tutorial.billboard_top_100_year_end WHERE "group" ILIKE 'snoop%';
Example 2: SELECT * FROM Customers WHERE City LIKE 'ber%';
The SQL UNION clause is used to select distinct values from the tables; the SQL UNION ALL clause is used to select all values, including duplicates. The UNION operator is used to combine the result sets of two or more SELECT statements. Every SELECT statement within a UNION must have the same number of columns,
the columns must also have similar data types, and the columns in every SELECT statement must be in the same order.
EXCEPT or MINUS
These return the records that exist in Dataset1 but not in Dataset2. Each SELECT statement within the EXCEPT query must have the same number of fields in the result sets, with similar data types. The difference is one of availability: EXCEPT is available in PostgreSQL, while MINUS is available in Oracle; there is otherwise no difference between the EXCEPT clause and the MINUS clause.
The IN operator allows you to specify multiple values in a WHERE clause; it is shorthand for multiple OR conditions.
The ANY operator returns a Boolean value as a result: it returns TRUE if any of the subquery values meet the condition, i.e., if the operation is true for any of the values in the range.
NOT IN can also take literal values, whereas NOT EXISTS needs a query to compare the results:
SELECT CAT_ID FROM CATEGORY_A WHERE CAT_ID NOT IN (SELECT CAT_ID FROM CATEGORY_B);
NOT EXISTS:
SELECT A.CAT_ID FROM CATEGORY_A A WHERE NOT EXISTS (SELECT B.CAT_ID FROM CATEGORY_B B WHERE B.CAT_ID = A.CAT_ID);
NOT EXISTS can be good to use because it can join with the outer query and can lead to use of an index if the criteria include an indexed column.
EXISTS and NOT EXISTS are typically used in conjunction with a correlated nested query. The result of EXISTS is a Boolean value: TRUE if the nested query result contains at least one tuple, FALSE if it contains no tuples.
Supporting operators in different DBMS environments:
Keyword | Database System
TOP | SQL Server, MS Access
LIMIT | MySQL, PostgreSQL, SQLite
FETCH FIRST | Oracle
In Oracle, the TOP clause is not supported; row limiting is traditionally done with the ROWNUM pseudocolumn (and, from Oracle 12c onward, with the FETCH FIRST clause).
SQL FUNCTIONS
Types of multiple-row functions in Oracle (aggregate functions)
AVG: retrieves the average value over the rows in a table, ignoring null values
COUNT: retrieves the number of rows (COUNT(*) counts all selected rows, including duplicates and rows with null values)
MAX: retrieves the maximum value of the expression, ignoring null values
MIN: retrieves the minimum value of the expression, ignoring null values
SUM: retrieves the sum of the values over the rows in a table, ignoring null values
Example:
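For illustration, a minimal sketch over a hypothetical emp(deptno, sal) table:

-- Aggregate functions grouped by department
SELECT deptno,
       COUNT(*) AS num_emps,  -- counts all rows, including those with NULL sal
       AVG(sal) AS avg_sal,   -- NULLs are ignored by AVG/MIN/MAX/SUM
       MIN(sal) AS min_sal,
       MAX(sal) AS max_sal,
       SUM(sal) AS total_sal
FROM   emp
GROUP  BY deptno;

-- Row limiting, tying back to the table of keywords above:
SELECT * FROM emp WHERE ROWNUM <= 5;                          -- classic Oracle
SELECT * FROM emp ORDER BY sal DESC FETCH FIRST 5 ROWS ONLY;  -- Oracle 12c+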
Explanation of Single Row Functions
Examples of date functions
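For illustration, a minimal sketch of common Oracle single-row date functions (the exact output depends on the session's NLS settings):

SELECT SYSDATE                                    AS today,
       ADD_MONTHS(SYSDATE, 3)                     AS plus_3_months,
       MONTHS_BETWEEN(SYSDATE, DATE '2024-01-01') AS months_elapsed,
       NEXT_DAY(SYSDATE, 'MONDAY')                AS next_monday,  -- day name per NLS language
       LAST_DAY(SYSDATE)                          AS month_end,
       TRUNC(SYSDATE, 'MM')                       AS month_start
FROM   dual;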
CHARTOROWID converts a value from the CHAR, VARCHAR2, NCHAR, or NVARCHAR2 datatype to the ROWID datatype. This function does not support CLOB data directly; however, CLOBs can be passed in as arguments through implicit data conversion.
For assignments, Oracle can automatically convert the following:
VARCHAR2 or CHAR to MLSLABEL
MLSLABEL to VARCHAR2
VARCHAR2 or CHAR to HEX
HEX to VARCHAR2
Example of Conversion Functions
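For illustration, a minimal sketch of the common conversion functions:

SELECT TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI') AS date_as_text,
       TO_CHAR(1234.5, '9,999.99')             AS number_as_text,
       TO_DATE('25-12-2023', 'DD-MM-YYYY')     AS text_as_date,
       TO_NUMBER('42.5')                       AS text_as_number,
       CAST('123' AS NUMBER(5))                AS cast_example
FROM   dual;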
Subquery Concept
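A subquery is a query nested inside another query. For illustration, a minimal sketch over a hypothetical emp table:

-- Single-row subquery: employees earning above the company average
SELECT ename, sal
FROM   emp
WHERE  sal > (SELECT AVG(sal) FROM emp);

-- Correlated subquery: the top earner within each department
SELECT e.ename, e.deptno, e.sal
FROM   emp e
WHERE  e.sal = (SELECT MAX(sal) FROM emp x WHERE x.deptno = e.deptno);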
END
CHAPTER 3 DATA MODELS AND MAPPING TECHNIQUES
Overview of data modeling in DBMS
The semantic data model is a method of structuring data in order to represent it in a specific logical way.
Types of data models in history:
Data abstraction is the process of hiding (suppressing) unnecessary details so that the high-level concept can be made more visible. A data model is a relatively simple representation, usually graphical, of more complex real-world data structures.
Data model schema and instance
A database instance is the data stored in the database at a particular moment; it is also called the database state (or occurrence or snapshot). The content of the database, the instance, is also called an extension. The term instance is also applied to individual database components, e.g., a record instance, table instance, or entity instance.
Types of instances
Initial database instance: the database instance that is initially loaded into the system.
Valid database instance: an instance that satisfies the structure and constraints of the database.
The database instance changes every time the database is updated.
A database schema is the overall design or skeleton structure of the database. It represents the logical view: a visual diagram showing the relationships among the objects of the entire database. A database schema can be represented by a visual diagram that shows the database objects and their relationships with each other. A schema contains schema objects like tables, foreign keys, primary keys, views, columns, data types, stored procedures, etc. A database schema is designed by the database designers to help programmers whose software will interact with the database. The process of database creation is called data modeling.
Relational schema definition
A relational schema refers to the metadata that describes the structure of data within a certain domain. It is the blueprint of a database that outlines the constraints that must be applied to ensure correct data (valid states).
Database schema definition
A relational schema may also be referred to as a database schema. A database schema is the collection of relation schemas for a whole database; a relational or database schema is a collection of metadata. A database schema describes the structure of and constraints on the data represented in a particular domain. A relational schema can be described as a blueprint of a database that outlines the way data is organized into tables; this blueprint contains no actual data. In a relational schema, each tuple is divided into fields called domains.
Other definitions: the overall design of the database; the structure of the database. The schema is also called the intension.
Types of schemas w.r.t. databases
DBMS schemas: logical/conceptual/physical/external schemas
Data warehouse (multi-dimensional) schemas: snowflake/star
OLAP schemas: fact constellation (galaxy) schema
ANSI-SPARC schema architecture
External level: view level, user level, external schema, client level.
Conceptual level: community view, ER model, conceptual schema, server level. Conceptual (high-level, semantic) data models are entity-based or object-based data models describing what data is stored and the relationships among the data; this level deals with logical data independence (external/conceptual mapping).
Logical schema: sometimes called the conceptual schema too (server level); implementation (representational) data models; DBMS-specific modeling.
Internal level: physical representation, internal schema, database level, low level. It deals with how data is stored in the database and with physical data independence (conceptual/internal mapping).
Physical data level: physical storage, physical schema; it sometimes overlaps with the internal schema. It is detailed in administration manuals.
Data independence
Data independence is the ability to make changes in either the logical or physical structure of the database without requiring reprogramming of application programs.
Data independence types
Logical data independence: the immunity of external schemas to changes in the conceptual schema.
Physical data independence: the immunity of the conceptual schema to changes in the internal schema.
There are two types of mapping in the database architecture:
Conceptual/internal mapping
The conceptual/internal mapping lies between the conceptual level and the internal level. Its role is to define the correspondence between the records and fields of the conceptual level and the files and data structures of the internal level.
External/conceptual mapping
The external/conceptual mapping lies between the external level and the conceptual level. Its role is to define the correspondence between a particular external view and the conceptual view.
Detailed description
When a schema at a lower level is changed, only the mappings between this schema and the higher-level schemas need to be changed in a DBMS that fully supports data independence. The higher-level schemas themselves are unchanged; hence, the application programs need not be changed, since they refer to the external schemas. For example, the internal schema may be changed when certain file structures are reorganized or new indexes are created to improve database performance.
Data abstraction
Data abstraction makes complex systems more user-friendly by removing the specifics of the system mechanics. The conceptual data model has been most successful as a tool for communication between the designer and the end user during the requirements analysis and logical design phases. Its success is due to the fact that the model, using either ER or UML, is easy to understand and convenient to represent. Another reason for its effectiveness is that it is a top-down approach using the concept of abstraction. In addition, abstraction techniques such as generalization provide useful tools for integrating end-user views to define a global conceptual schema. Differences between user views show up in conceptual data models as different levels of abstraction, as different connectivities of relationships (one-to-many, many-to-many, and so on), or as the same concept being modeled as an entity, attribute, or relationship, depending on the user's perspective. Techniques used for view integration include abstractions such as generalization and aggregation to create new supertypes or subtypes, or even the introduction of new relationships. The higher-level abstraction, the entity cluster, must maintain the same relationships between entities inside and outside the entity cluster as those that occur between the same entities in the lower-level diagram.
ER and EER terminology is used not only in conceptual data modeling but also in the artificial intelligence literature when discussing knowledge representation (KR). The goal of KR techniques is to develop concepts for accurately modeling some domain of knowledge by creating an ontology. Ontology is a fundamental part of the Semantic Web; the goal of the World Wide Web Consortium (W3C) is to bring the web to its full potential as a semantic web while reusing previous systems and artifacts. Most legacy systems have been documented with structured analysis and structured design (SASD), especially with simple or extended ER diagrams (ERDs); such systems need upgrading to become part of the semantic web. Transformation rules from ERDs to OWL-DL ontologies at the concrete level facilitate an easy and understandable transformation from ERD to OWL. Ontology engineering is an important aspect of the semantic web vision for attaining a meaningful representation of data. Although various techniques exist for the creation of ontologies, most methods involve
a number of complex phases, scenario-dependent ontology development, and poor validation of the ontology. A lightweight alternative is to build the domain ontology from an Entity-Relationship (ER) model.
We now discuss four abstraction concepts that are used in semantic data models, such as the EER model, as well as in KR schemes: (1) classification and instantiation, (2) identification, (3) specialization and generalization, and (4) aggregation and association.
One ongoing project attempting to allow information exchange among computers on the Web is the Semantic Web, which attempts to create knowledge representation models that are general enough to allow meaningful information exchange and search among machines. One commonly used definition of ontology is "a specification of a conceptualization." In this definition, a conceptualization is the set of concepts used to represent the part of reality or knowledge that is of interest to a community of users.
Types of abstractions
Classification: A is a member of class B.
Aggregation: B, C, and D are aggregated into A; A is made of/composed of B, C, and D (is-made-of, is-associated-with, is-part-of, is-component-of). Aggregation is an abstraction through which relationships are treated as higher-level entities.
Generalization: B, C, and D can be generalized into A; B is-a/is-an A (is-a, is-kind-of).
Category or union: a category represents a single subclass relationship with more than one superclass.
Specialization: A can be specialized into B, C, and D; B, C, and D are special cases of A (has-a, has-an; the has-a approach is used in specialization).
Composition: is-made-of (like aggregation).
Identification: is-identified-by.
UML diagram notations
UML stands for Unified Modeling Language; ERD stands for Entity Relationship Diagram. UML is a popular and standardized modeling language that is primarily used for object-oriented software. Entity-Relationship diagrams are used in structured analysis and conceptual modeling. Object-oriented data models are typically depicted using Unified Modeling Language (UML) class diagrams. UML is a language based on OO concepts that describes a set of diagrams and symbols that can be used to model a system graphically. UML class diagrams are used to represent data and their relationships within the larger UML object-oriented system modeling language.
Associations
UML uses Boolean attributes instead of unary relationships but allows all other kinds of relationships. Optionally, each association may be given at most one name; association names normally start with a capital letter. Binary associations are depicted as lines between classes. Association lines may include elbows to assist with layout or when needed (e.g., for ring relationships).
ER diagram and class diagram synchronization sample
Synchronization between an ERD and a class diagram is supported: you can transform the system design from the data model to the class model and vice versa, without losing its persistent logic.
Conversions of terminology between UML and ERD
Relational data model and its main evolution
Inclusion: the ER model is the class diagram of the UML series.
ER notation comparison with UML and their relationship
ER construct notation relationships
 Rest of the ER construct notation comparison
Appropriate ER model design naming conventions
Guideline 1
Nouns => entity, object, relation, table name.
Verbs => indicate relationship types.
Common nouns => a common noun (such as student or employee) in English corresponds to an entity type in an ER diagram.
Proper nouns => proper nouns are entities, e.g., John, Singapore, New York City.
Note: A relational database uses relations, or two-dimensional tables, to store information.
Types of attributes
In an ER diagram, attributes associated with an entity set may be of the following types:
1. Simple/atomic/static attributes
2. Key attributes
3. Unique attributes
4. Stored attributes
5. Prime attributes
6. Derived attributes (e.g., AGE derived from DOB; drawn as a dashed oval)
7. Composite attributes (e.g., Address (street, door #, city, town, country))
8. Multivalued attributes (drawn as a double ellipse; e.g., phone #, hobby, degrees)
9. Dynamic attributes
10. Boolean attributes
The fundamental new idea in the MOST model is the so-called dynamic attribute. Each attribute of an object class is classified as either static or dynamic. A static attribute is as usual; a dynamic attribute changes its value with time automatically.
Attributes of database tables that are candidate keys of those tables are called prime attributes.
Symbols of attributes:
The Entity
The entity is the basic building block of the E-R data model. The term entity is used for three different concepts:
Entity type
Entity instance
Entity set
Technical types of entity:
 Tangible entity: Tangible entities are those that exist physically in the real world. Example: a person, a car.
 Intangible entity: Intangible (conceptual) entities are those that exist only logically and have no physical existence. Example: a bank account.
Major entity types
1. Strong entity type
2. Weak entity type
3. Naming entity
4. Characteristic entity
5. Dependent entity
6. Independent entity
Details of entity types
An entity type whose instances can exist independently, that is, without being linked to the instances of any other entity type, is called a strong entity type. A weak entity can be identified uniquely only by considering the primary key of another (owner) entity. The owner entity set and the weak entity set must participate in a one-to-many relationship set (one owner, many weak entities), and the weak entity set must have total participation in this identifying relationship set. Weak entities have only a "partial key" (shown with a dashed underline). When the owner entity is deleted, all owned weak entities must also be deleted; a relational sketch of this follows the lists below.
Naming entity types
Following are some recommendations for naming entity types. Singular nouns are recommended, but plurals can also be used. Use organization-specific names (customer, client, owner: anything that fits). Writing names in capitals is generally followed, but other casing will also work. Abbreviations can be used; be consistent, and avoid confusing abbreviations: if they are confusing for others today, tomorrow they will confuse you too.
Database design tools
Some commercial products are aimed at providing environments to support the DBA in performing database design. These environments are provided by database design tools, or sometimes as part of a more general class of products known as computer-aided software engineering (CASE) tools. Such tools usually have several components, chosen from the following kinds; it would be rare for a single product to offer all these capabilities:
1. ER design editor
2. ER-to-relational design transformer
3. FD-to-ER design transformer
4. Design analyzers
ER modeling rules for designing a database
Three components:
1. Structural part – a set of rules applied to the construction of the database
2. Manipulative part – defines the types of operations allowed on the data
3. Integrity rules – ensure the accuracy of the data
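As promised above, a minimal sketch of how a weak entity is typically realized in SQL (all names are hypothetical): the weak entity borrows the owner's key into its own composite key and cascades deletes.

-- Owner (strong) entity
CREATE TABLE building (
  building_no NUMBER(5) PRIMARY KEY,
  address     VARCHAR2(100)
);
-- Weak entity: identified by the owner's key plus its partial key
CREATE TABLE apartment (
  building_no NUMBER(5) NOT NULL,
  apt_no      NUMBER(4) NOT NULL,  -- partial key
  bedrooms    NUMBER(2),
  PRIMARY KEY (building_no, apt_no),
  FOREIGN KEY (building_no) REFERENCES building (building_no)
    ON DELETE CASCADE              -- deleting the owner deletes its weak entities
);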
Step 1: DFD (Data Flow Diagram)
Data flow diagrams: the most common tool used for designing database systems is the data flow diagram. It is used to design systems graphically and expresses different levels of system detail at different DFD levels.
Characteristics
DFDs show the flow of data between the different processes of a specific system. DFDs are simple and hide complexity. DFDs are descriptive, and the links between processes describe the information flow. DFDs are focused on the flow of information only. Data flows are pipelines through which packets of information flow.
DBMS applications store data as files; RDBMS applications store data in tabular form. In the file system approach, no concept of a data model exists: the data mostly consists of different types of files (mp3, mp4, txt, doc, etc.) grouped into directories on a hard drive. A data model, by contrast, is a collection of logical constructs used to represent the data structure and relationships within the database.
A data flow diagram shows the way information flows through a process or system. It includes data inputs and outputs, data stores, and the various subprocesses the data moves through.
Symbols used in DFDs
Data flow => arrow symbol
Data store => a rectangle open on the right side, with the left side drawn with double lines
Process => circle or rounded rectangle
Numbered DFD process => a circle or rectangle with its number written above a line through its center
To create a DFD, follow these steps:
1. Create a list of activities
2. Construct the context-level DFD (external entities, processes)
3. Construct the level 0 DFD (manageable sub-processes)
4. Construct the level 1-n DFDs (actual data flows and data stores)
Types of DFD
1. Context diagram
2. Level 0, 1, 2 diagrams
3. Detailed diagram
4. Logical DFD
5. Physical DFD
Context diagrams are the most basic data flow diagrams. They provide a broad view that is easily digestible but offers little detail. They always consist of a single process and describe a single system. The only process displayed in a context DFD is the process/system being analyzed. The name of a context DFD is generally a noun phrase.
Example context DFD diagram
At the context level, no data stores are created.
0-level DFD
The level 0 diagram is used to describe the working of the whole system. Once a context DFD has been created, the level zero diagram (or level 'naught' diagram) is created. The level zero diagram contains all the apparent details of the system. It shows the interactions between a number of processes and may include a large number of external entities. At this level, the designer must keep a balance in describing the system using the level 0 diagram; balance means giving proper depth to the level 0 diagram processes.
1-level DFD
In a 1-level DFD, the context diagram is decomposed into multiple bubbles/processes. At this level, we highlight the main functions of the system and break down the high-level process of the 0-level DFD into subprocesses.
2-level DFD
A 2-level DFD goes one step deeper into parts of the 1-level DFD. It can be used to plan or record the specific/necessary details about the system's functioning.
Detailed DFDs are detailed enough that it doesn't usually make sense to break them down further. They describe the functionality of the processes that were shown briefly in the level 0 diagram; detailed DFDs are generally expressed as the successive details of those processes for which enough detail was not, or could not be, provided earlier. Logical data flow diagrams focus on what happens in a particular information flow: what information is being transmitted, what entities are receiving that information, what general processes occur, etc.
Logical DFD
A logical data flow diagram mainly focuses on the system process and illustrates how data flows in the system. Logical DFDs are used in various organizations for the smooth running of systems; in a banking software system, for example, a logical DFD describes how data moves from one entity to another.
Physical DFD
A physical data flow diagram shows how the data flow is actually implemented in the system. A physical DFD is more specific and closer to implementation.
 Conceptual models include the entity-relationship database model (ERDBD), the object-oriented model (OODBM), and record-based data models.
 Implementation models: the types of record-based logical models are the hierarchical database model (HDBM), the network database model (NDBM), and the relational database model (RDBM).
 Semi-structured data model: the semi-structured data model allows data specifications at places where the individual data items of the same type may have different attribute sets. The Extensible Markup Language, also known as XML, is widely used for representing semi-structured data.
Evolution of data models and their types
ERD Modeling and Database Table Relationships
What is an ERD? The structure (schema or logical design) of a database, drawn as a diagram, is called an entity-relationship diagram.
Category of relationships
Optional relationship
Mandatory relationship
Types of relationships concerning degree
Unary (self or recursive) relationship: a single entity; the relationship exists between occurrences of the same entity set.
Binary: two entities are associated in a relationship.
Ternary: a ternary relationship is when three entities participate in the relationship, i.e., a relationship type that involves many-to-many relationships among three tables.
For example: the university might need to record which teachers taught which subjects in which courses.
N-ary
An n-ary relationship involves many entities: it exists when there are n types of entities participating. A relationship between more than two entities is called an n-ary relationship. One limitation of n-ary relationships is that, with so many participating entities, they are very hard to convert into a single relational table.
Examples of relationships R between two entities E and F
Relationship notations with entities: because it uses diamonds for relationships, Chen notation takes up more space than Crow's Foot notation, and Chen notation also requires more symbols; Crow's Foot has a slight learning curve.
Chen notation allows the following cardinalities:
One-to-one (1:1) – each entity instance is associated with only one instance of the other entity
One-to-many (1:N) – one entity instance can be associated with multiple instances of another entity
Many-to-one (N:1) – many entity instances are associated with only one instance of another entity
Many-to-many (M:N) – multiple entity instances can be associated with multiple instances of another entity
ER Design Issues
Here, we will discuss the basic design issues of an ER database schema in the following points:
1) Use of entity set vs. attributes
The use of an entity set or an attribute depends on the structure of the real-world enterprise that is being modeled and the semantics associated with its attributes.
2) Use of entity sets vs. relationship sets
It can be difficult to decide whether an object is best expressed by an entity set or a relationship set.
3) Use of binary vs. n-ary relationship sets
Generally, the relationships described in databases are binary relationships. However, non-binary relationships can be represented by several binary relationships.
Transforming Entities and Attributes to Relations
Our ultimate aim is to transform the ER design into a set of definitions for relational tables in a computerized database, which we do through a set of transformation rules.
The first step is to design a rough schema by analyzing the requirements. Then normalize the ERD and remove functional dependencies from entities to move on to the final steps.
Transformation Rule 1. Each entity in an ER diagram is mapped to a single table in a relational database.
Transformation Rule 2. A key attribute of the entity type is represented by the primary key. Each single-valued attribute becomes a column of the table.
Transformation Rule 3. Given an entity E with a primary identifier, a multivalued attribute attached to E in an ER diagram is mapped to a table of its own.
Transforming Binary Relationships to Relations
We are now prepared to give the transformation rule for a binary many-to-many relationship.
Transformation Rule 3.5. N–N relationships: when two entities E and F take part in a many-to-many binary relationship R, the relationship is mapped to a representative table T in the related relational
database design. The table contains columns for all attributes in the primary keys of both tables transformed from entities E and F, and this set of columns forms the primary key for table T. Table T also contains columns for all attributes attached to the relationship. Relationship occurrences are represented by rows of the table, with the related entity instances uniquely identified by their primary key values.
Case 1: Binary relationship with 1:1 cardinality and total participation of an entity
Total participation means the minimum occurrence is 1, drawn with double lines. A person has 0 or 1 passport number, and a passport is always owned by 1 person. So it is 1:1 cardinality with a full participation constraint from Passport. First convert each entity and the relationship to tables.
Case 2: Binary relationship with 1:1 cardinality and partial participation of both entities
A male marries 0 or 1 female and vice versa, so it is a 1:1 cardinality with a partial participation constraint from both. First convert each entity and the relationship to tables. The Male table corresponds to the Male entity with key M-Id. Similarly, the Female table corresponds to the Female entity with key F-Id. The Marry table represents the relationship between Male and Female (which male marries which female), so it takes attribute M-Id from Male and F-Id from Female.
Case 3: Binary relationship with N:1 cardinality
Case 4: Binary relationship with M:N cardinality
Case 5: Binary relationship with a weak entity
In this scenario, an employee can have many dependents, and one dependent can depend on one employee. A dependent does not have any existence without an employee (e.g., a child is recorded as dependent on his or her father in the father's company). So it is a weak entity, and its participation will always be total.
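As a concrete illustration of Transformation Rule 3.5 (and Case 4 above), here is a minimal sketch in Oracle-style SQL; the STUDENT and COURSE entities and all names are illustrative only, not taken from the text:
create table student (
  student_id number primary key,
  name       varchar2(50)
);
create table course (
  course_id number primary key,
  title     varchar2(50)
);
-- Representative table T for the M:N relationship. Its primary key is
-- the combination of the primary keys of both participating entities.
create table enrollment (
  student_id number references student,
  course_id  number references course,
  grade      varchar2(2),              -- an attribute attached to the relationship itself
  primary key (student_id, course_id)
);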
EERD design approaches
Generalization is the concept that some entities are subtypes of other, more general entities. They are represented by an "is a" relationship: for example, Faculty (ISA, IS-A, or IS A) is a subtype of Employee. One method of representing subtype relationships, shown below, is also known as the top-down approach.
Exclusive subtype: if subtypes are exclusive, one supertype relates to at most one subtype.
Inclusive subtype: if subtypes are inclusive, one supertype can relate to one or more subtypes.
Data abstraction in EERD levels
Concepts of total and partial participation, subclasses and superclasses, and specializations and generalizations appear at different levels of abstraction:
View level: the highest level of data abstraction, like the EERD.
Middle level: the middle level of data abstraction, like the ERD.
Physical level: the lowest level of data abstraction, like the physical/internal data stored on disk.
Specialization
Subgrouping into subclasses (top-down approach) (HASA, HAS-A, HAS AN, HAS-AN).
Inheritance – subclasses inherit attributes and relationships from the superclass (Name, Birthdate, etc.).
Generalization
The reverse process of defining subclasses (bottom-up approach): bring together the common attributes of entities (ISA, IS-A, IS AN, IS-AN).
Union
Models a class/subclass with more than one superclass of distinct entity types. Attribute inheritance is selective.
Constraints on Specialization and Generalization
We have four types of specialization/generalization constraints:
1. Disjoint, total
2. Disjoint, partial
3. Overlapping, total
4. Overlapping, partial
Multiplicity (relationship constraint)
Covering constraint: whether the entities in the subclasses collectively include all entities in the superclass.
Note: Generalization is usually total because the superclass is derived from the subclasses.
The term cardinality has two different meanings based on the context in which you use it.
Relationship Constraint Types
Cardinality ratio
Specifies the maximum number of relationship instances in which each entity can participate. Types: 1:1, 1:N, or M:N.
Participation constraint
Specifies whether the existence of an entity depends on its being related to another entity. Types: total and partial. It thus gives the minimum number of relationship instances in which an entity can participate: 1 for total participation, 0 for partial. Diagrammatically, a double line is drawn from the relationship type to the entity type (a dotted oval, by contrast, denotes a derived attribute).
There are two types of participation constraints:
1. Partial participation
2. Total participation (minimum occurrence is 1, drawn with double lines)
When we require all entities to participate in the relationship (total participation), we use double lines to specify it. (Every loan has to have at least one customer.)
Cardinality expresses how many entity occurrences are associated with one occurrence of the related entity. The cardinality of a relationship is the number of instances of entity B that can be associated with entity A. There is a minimum cardinality and a maximum cardinality for each relationship, with an unspecified maximum cardinality shown as N. Cardinality limits are usually derived from the organization's policies or external constraints.
For example: at the university, each teacher can teach an unspecified maximum number of subjects as long as his/her weekly hours do not exceed 24 (an external constraint set by an industrial award). Teachers may teach 0 subjects if they are involved in non-teaching projects. Therefore, the cardinality limits for TEACHER are (0, N). The university's policies state that each subject is taught by only one teacher, but it is possible to have subjects that have not yet been assigned a teacher. Therefore, the cardinality limits for SUBJECT are (0, 1). Teacher and subject
have an M:N relationship connectivity (a binary relationship, which becomes ternary if we break it up). Such situations are modeled using a composite entity (or gerund).
Cardinality constraint: a quantification of the relationship between two concepts or classes (a constraint on aggregation). Remember, cardinality is always a relationship to another thing.
Max cardinality (cardinality): always 1 or many. Class A has a relationship to Package B with a cardinality of one, which means at most there can be one occurrence of this class in the package. The opposite could be a package that has a max cardinality of N, which would mean there can be N occurrences of the class.
Min cardinality (optionality): simply means "required"; it is always 0 or 1, where 0 means optional and 1 means mandatory.
The three types of cardinality you can define for a relationship are as follows:
Minimum cardinality. Governs whether selecting items from this relationship is optional or required. If you set the minimum cardinality to 0, selecting items is optional. If you set the minimum cardinality to greater than 0, the user must select that number of items from the relationship. Combinations: optional to mandatory, optional to optional, mandatory to optional, mandatory to mandatory.
Summary of ER diagram symbols
Maximum cardinality. Sets the maximum number of items that the user can select from a relationship. If you set the minimum cardinality to greater than 0, you must set the maximum cardinality to a number at least as large. If you do not enter a maximum cardinality, the default is 999. Types of max cardinality: 1 to 1, 1 to many, many to many, many to 1.
Default cardinality. Specifies what quantity of the default product is automatically added to the initial solution that the user sees. Default cardinality must be equal to or greater than the minimum cardinality and must be less than or equal to the maximum cardinality.
The (min, max) notation replaces cardinality-ratio numerals and single/double-line notation: associate a pair of integer numbers (min, max) with each participant of an entity type E in a relationship type R, where 0 ≤ min ≤ max and max ≥ 1; max = N means finite but unbounded.
Relationship types can also have attributes. Attributes of 1:1 or 1:N relationship types can be migrated to one of the participating entity types. For a 1:N relationship type, the relationship attribute can be migrated only to the entity type on the N-side of the relationship. Attributes of M:N relationship types must be specified as relationship attributes.
In the case of data modeling, cardinality defines the number of attributes in one entity set that can be associated with the number of attributes of another set via a relationship set. In simple words, it refers to the relationship one table can have with the other table: one-to-one, one-to-many, many-to-one, or many-to-many. A third meaning is the number of tuples in a relation.
In the case of SQL, cardinality refers to a number: it gives the number of unique values that appear in the table for a particular column. For example, if you have a table called Person with the column Gender, and the Gender column can have only the values 'Male' or 'Female', then its cardinality is 2. As the number of tuples in a relation, cardinality is the number of rows.
The multiplicity of an association indicates how many objects of the opposing class can be instantiated for one object. Multiplicity = cardinality + participation. The dictionary definition of cardinality is the number of elements in a particular set.
Multiplicity can be set for attributes, operations, and associations in a UML class diagram (equivalent to an ERD) and for associations in a use case diagram.
A cardinality is how many elements are in a set. A multiplicity tells you the minimum and maximum allowed members of the set. They are not synonymous. Given the example below:
0..1 ---------- 1..N
Multiplicities: the first multiplicity, for the left entity, is 0..1; the second multiplicity, for the right entity, is 1..N.
Cardinalities for the first multiplicity: lower cardinality 0, upper cardinality 1.
Cardinalities for the second multiplicity: lower cardinality 1, upper cardinality N.
Multiplicity is the constraint on the collection of the association objects, whereas cardinality is the count of the objects that are in the collection. Multiplicity is the cardinality constraint: the multiplicity of an association = participation of an element + cardinality of an element.
UML uses the term multiplicity, whereas data modeling uses the term cardinality. They are, for all intents and purposes, the same. Cardinality (sometimes referred to as ordinality) is what is used in ER modeling to describe a relationship between two entities.
Cardinality and Modality
The main difference between cardinality and modality is that cardinality is defined as the metric used to specify the number of occurrences of one object related to the number of occurrences of another object. On the contrary, modality signifies whether a certain data object must participate in the relationship or not.
Cardinality refers to the maximum number of times an instance in one entity can be associated with instances in the related entity. Modality refers to the minimum number of times an instance in one entity can be associated with an instance in the related entity.
Cardinality can be 1 or many, and the symbol is placed on the outside ends of the relationship line, closest to the entity. Modality can be 1 or 0, and the symbol is placed on the inside, next to the cardinality symbol. For a cardinality of 1, a straight line is drawn; for a cardinality of many, a foot with three toes is drawn. For a modality of 1, a straight line is drawn; for a modality of 0, a circle is drawn.
zero or more; 1 or more; 1 and only 1 (exactly 1)
Multiplicity = cardinality + participation.
Cardinality: denotes the maximum number of possible relationship occurrences in which a certain entity can participate (in simple terms: at most).
Note: connectivity, modality/multiplicity/cardinality, and relationship are overlapping terms for the same ideas.
Participation: denotes whether all or only some entity occurrences participate in a relationship (in simple terms: at least).
Comparison of cardinality and modality:
Basic: cardinality is the maximum number of associations between table rows; modality is the minimum number of row associations.
Types: cardinality can be one-to-one, one-to-many, or many-to-many; modality is nullable or not nullable.
Generalization is like a bottom-up approach in which two or more entities of lower levels combine to form a higher-level entity if they have some attributes in common. Generalization is much like a subclass-and-superclass system, the only difference being the approach: generalization uses the bottom-up approach, combining subclasses to make a superclass. The ISA (IS-A, IS A, IS AN, IS-AN) notion is used in generalization. Generalization is the result of taking the union of two or more (lower-level) entity types to produce a higher-level entity type; generalization is the same as UNION, and specialization is the same as ISA.
A specialization is a top-down approach, the opposite of generalization. In specialization, one higher-level entity can be broken down into two lower-level entities. Specialization is the result of taking a subset of a higher-level entity type to form a lower-level entity type. Normally, the superclass is defined first, the subclass and its related attributes are defined next, and the relationship set is then added; the HASA (HAS-A, HAS AN, HAS-AN) notion applies.
In UML-to-EER mapping, specialization or generalization comes in the form of a hierarchical entity set:
Transforming EERD to Relational Database Model
Specialization/Generalization Lattice Example (UNIVERSITY): EERD to Relational Model
Mapping Process
1. Create tables for all higher-level entities.
2. Create tables for lower-level entities.
3. Add the primary keys of higher-level entities in the tables of lower-level entities.
4. In lower-level tables, add all other attributes of lower-level entities.
5. Declare the primary key of the higher-level table and the primary key of the lower-level table.
6. Declare foreign key constraints.
A sketch of this mapping in SQL is shown below. Finally, note the concept of entity clustering, which abstracts the ER schema to such a degree that the entire schema can appear on a single sheet of paper or a single computer screen.
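As a minimal sketch of the mapping process above in Oracle-style SQL, assuming a hypothetical EMPLOYEE supertype with FACULTY and STAFF subtypes (all names are illustrative):
create table employee (
  emp_id number primary key,   -- higher-level entity
  name   varchar2(50)
);
create table faculty (
  emp_id number primary key references employee,  -- PK of the higher-level table reused as PK and FK
  rank   varchar2(20)                             -- subtype-specific attribute
);
create table staff (
  emp_id number primary key references employee,
  grade  varchar2(10)
);
END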
CHAPTER 4 DISCOVERING BUSINESS RULES AND DATABASE CONSTRAINTS
Overview of Database Constraints
Definition of data integrity: constraints placed on the set of values allowed for the attributes of a relation are known as relational integrity.
Constraints – these are special restrictions on allowable values. For example, the passing marks for a student must always be greater than 50%.
Categories of Constraints
Constraints on databases can generally be divided into three main categories:
1. Constraints that are inherent in the data model. We call these inherent model-based constraints or implicit constraints.
2. Constraints that can be directly expressed in schemas of the data model, typically by specifying them in the DDL (data definition language). We call these schema-based constraints or explicit constraints.
3. Constraints that cannot be directly expressed in the schemas of the data model, and hence must be expressed and enforced by the application programs. We call these application-based or semantic constraints or business rules.
Types of data integrity
1. Physical integrity
Physical integrity is the process of ensuring the wholeness, correctness, and accuracy of data when data is stored and retrieved.
2. Logical integrity
Logical integrity refers to the accuracy and consistency of the data itself. Logical integrity ensures that the data makes sense in its context. Types of logical integrity: entity integrity and domain integrity.
The model-based or implicit constraints include domain constraints, key constraints, entity integrity constraints, and referential integrity constraints.
Domain constraints can be violated if an attribute value is given that does not appear in the corresponding domain or is not of the appropriate data type. Key constraints can be violated if a key value in the new tuple already exists in another tuple in the relation r(R). Entity integrity can be violated if any part of the primary key of the new tuple t is NULL. Referential integrity can be violated if the value of any foreign key in t refers to a tuple that does not exist in the referenced relation.
Note: insertion constraints and constraints on NULLs are called explicit. An insert can violate any of the four types of implicit constraints discussed above.
1. Business rule or default relation constraints
These rules are applied to data before (first) the data is inserted into the table columns, for example the Unique, Not NULL, and Default constraints:
1. The primary key value can't be null.
2. Not null (absence of any value, i.e., unknown or not applicable to a tuple)
3. Unique
4. Primary key
5. Foreign key
6. Check
7. Default
2. Null constraints
Comparisons involving NULL and three-valued logic: SQL has various rules for dealing with NULL values. Recall from Section 3.1.2 that NULL is used to represent a missing value, but that it usually has one of three different interpretations: value unknown (exists but is not known), value not available (exists but is purposely withheld), or value not applicable (the attribute is undefined for this tuple). Consider the following examples to illustrate each of the meanings of NULL.
1. Unknown value. A person's date of birth is not known, so it is represented by NULL in the database.
2. Unavailable or withheld value. A person has a home phone but does not want it to be listed, so it is withheld and represented as NULL in the database.
3. Not applicable attribute. An attribute Last_College_Degree would be NULL for a person who has no college degrees because it does not apply to that person.
3. Enterprise constraints
Enterprise constraints – sometimes referred to as semantic constraints – are additional rules specified by users or database administrators and can be based on multiple tables. Here are some examples:
A class can have a maximum of 30 students.
A teacher can teach a maximum of four classes per semester.
An employee cannot take part in more than five projects.
The salary of an employee cannot exceed the salary of the employee's manager.
4. Key constraints or uniqueness constraints
These are called uniqueness constraints since they ensure that every tuple in the relation is unique. A relation can have multiple keys or candidate keys (minimal superkeys), out of which we choose one as the primary key. There is no restriction on choosing the primary key out of the candidate keys, but it is suggested to go with the candidate key with the fewest attributes. Null values are not allowed in the primary key, hence the Not Null constraint is also part of the key constraint.
5. Domain, field, and row integrity constraints
Domain integrity: a domain of possible values must be associated with every attribute (for example, integer types, character types, date/time types). Declaring an attribute to be of a particular domain acts as a constraint on the values that it can take. Domain integrity rules govern these values: the values in a specific field/cell must lie within the column's domain and represent a specific location within a table.
In a database system, domain integrity is defined by:
1. The data type and the length
2. The NULL value acceptance
3. The allowable values, through techniques like constraints or rules, and the default value.
Some examples of domain-level integrity are mentioned below:
 Data type – for example integer, characters, etc.
 Date format – for example dd/mm/yy or mm/dd/yyyy or yy/mm/dd.
 Null support – indicates whether the attribute can have null values.
 Length – represents the length of characters in a value.
 Range – the range specifies the lower and upper boundaries of the values the attribute may legally have.
Entity integrity: no attribute of a primary key can be null (every tuple must be uniquely identified).
6. Referential integrity constraints
A referential integrity constraint is commonly known as a foreign key constraint. Foreign key values are derived from the primary key of another table. Similar options exist for dealing with referential integrity violations caused by Update as the options discussed for the Delete operation.
There are two types of referential integrity constraints:
 Insert constraint: we can't insert a value into the CHILD table if the value is not stored in the MASTER table.
 Delete constraint: we can't delete a value from the MASTER table if the value exists in the CHILD table.
The three rules that referential integrity enforces are:
1. A foreign key must have a corresponding primary key. ("No orphans" rule.)
2. When a record in a primary table is deleted, all related records referencing the primary key must also be deleted, which is typically accomplished by using cascade delete.
3. If the primary key for a record changes, all corresponding records in other tables using the primary key as a foreign key must also be modified. This can be accomplished by using a cascade update.
7. Assertion constraints
An assertion is any condition that the database must always satisfy. Domain constraints and integrity constraints are special forms of assertions.
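Bringing several of these constraint types together, the following Oracle-style sketch declares entity, null, uniqueness, domain, and referential integrity constraints in DDL (the table and column names are hypothetical):
create table department (
  dept_id   number primary key,             -- entity integrity
  dept_name varchar2(30) unique not null    -- uniqueness and null constraints
);
create table employee (
  emp_id  number primary key,
  name    varchar2(50) not null,
  salary  number(8,2) check (salary > 0),   -- domain/check constraint
  dept_id number references department (dept_id)
          on delete cascade                 -- referential integrity with cascade delete
);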
8. Authorization constraints
We may want to differentiate among users as far as the type of access they are permitted to various data values in the database. This differentiation is expressed in terms of authorization, the most common forms being:
Read authorization – allows reading but not modification of data.
Insert authorization – allows insertion of new data but not modification of existing data.
Update authorization – allows modification, but not deletion.
9. Preceding integrity constraints
The preceding integrity constraints are included in the data definition language because they occur in most database applications. However, they do not include a large class of general constraints, sometimes called semantic integrity constraints, which may have to be specified and enforced on a relational database. The types of constraints we discussed so far may be called state constraints because they define the constraints that a valid state of the database must satisfy. Another type of constraint, called a transition constraint, can be defined to deal with state changes in the database. An example of a transition constraint is: "the salary of an employee can only increase."
What is the use of data constraints?
Constraints are used to:
Avoid bad data being entered into tables.
Enforce business logic at the database level.
Improve database performance.
Enforce uniqueness and avoid redundant data in the database.
END
CHAPTER 5 DATABASE DESIGN STEPS AND IMPLEMENTATIONS
SQL versions:
 1970 – Dr. Edgar F. "Ted" Codd described a relational model for databases.
 1974 – Structured Query Language appeared.
 1978 – IBM released a product called System/R.
 1986 – SQL1: IBM developed the prototype of a relational database, which was standardized by ANSI.
 1989 – First minor revision; the standard itself changed little.
 1992 – SQL2 (SQL-92) launched, a major revision of the standard.
 1999 to 2003 – SQL3 (SQL:1999) launched, with features like triggers, object orientation, etc.
 2006 – Support for the XML Query Language.
 2011 – Improved support for temporal databases.
 In summary: the standards run from SQL-86 in 1986 to the most recent revisions, SQL:2011 and SQL:2016.
SQL-86
The first SQL standard was SQL-86. It was published in 1986 as an ANSI standard and in 1987 as an International Organization for Standardization (ISO) standard. The starting point for the ISO standard was IBM's SQL standard implementation. This version of the SQL standard is also known as SQL 1.
SQL-89
The next SQL standard was SQL-89, published in 1989. This was a minor revision of the earlier standard, a superset of SQL-86 that replaced SQL-86. The size of the standard did not change.
SQL-92
The next revision of the standard was SQL-92 – and it was a major revision. The language introduced by SQL-92 is sometimes referred to as SQL 2. The standard document grew from 120 to 579 pages. However, much of the growth was due to more precise specifications of existing features. The most important new features were:
An explicit JOIN syntax and the introduction of outer joins: LEFT JOIN, RIGHT JOIN, FULL JOIN. The introduction of NATURAL JOIN and CROSS JOIN.
SQL:1999
SQL:1999 (also called SQL 3) was the fourth revision of the SQL standard. Starting with this version, the standard name used a colon instead of a hyphen to be consistent with the names of other ISO standards. This standard was published in multiple installments between 1999 and 2002.
In 1993, the ANSI and ISO development committees decided to split future SQL development into a multi-part standard. The first installment came in 1995, and SQL:1999 had many parts:
Part 1: SQL/Framework (100 pages) defined the fundamental concepts of SQL.
Part 2: SQL/Foundation (1050 pages) defined the fundamental syntax and operations of SQL: types, schemas, tables, views, query and update statements, expressions, and so forth. This part is the most important for regular SQL users.
Part 3: SQL/CLI (Call-Level Interface) (514 pages) defined an application programming interface for SQL.
Part 4: SQL/PSM (Persistent Stored Modules) (193 pages) defined extensions that make SQL procedural.
Part 5: SQL/Bindings (270 pages) defined methods for embedding SQL statements in application programs written in a standard programming language. The Dynamic SQL and Embedded SQL bindings were taken from SQL-92; no new work was active at the time, although C++ and Java interfaces were under discussion.
Part 6: SQL/XA. An SQL specialization of the popular XA interface developed by X/Open (see below).
Part 7: SQL/Temporal. A newly approved SQL subproject to develop enhanced facilities for temporal data management using SQL.
Part 8: SQL Multimedia (SQL/MM). A new ISO/IEC international standardization project for the development of an SQL class library for multimedia applications, approved in early 1993. This standardization activity specifies packages of SQL abstract data type (ADT) definitions using the facilities for ADT specification and invocation provided in the emerging SQL3 specification.
SQL:2006 further specified how to use SQL with XML. It was not a revision of the complete SQL standard, just Part 14, which deals with SQL-XML interoperability. The current SQL standard is SQL:2019, which added Part 15, defining multidimensional array support in SQL.
SQL:2003 and beyond
In the 21st century, the SQL standard has been regularly updated. The SQL:2003 standard was published on March 1, 2004. Its major addition was window functions, a powerful analytical feature that allows you to compute summary statistics without collapsing rows. Window functions significantly increased the expressive power of SQL. They are extremely useful in preparing all kinds of business reports, analyzing time series data, and analyzing trends. The addition of window functions to the standard coincided with the popularity of OLAP and data warehouses, as people started using databases to make data-driven business decisions; this trend has only gained momentum thanks to the growing amount of data that all businesses collect. SQL:2003 also introduced XML-related functions, sequence generators, and identity columns.
Conformance with Standard SQL
This section declares Oracle's conformance to the SQL standards established by these organizations:
1.
American National Standards Institute (ANSI) in 1986.
2. International Standards Organization (ISO) in 1987.
3. United States Federal Government Federal Information Processing Standards (FIPS).
SQL standards: ANSI, ISO, and FIPS
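SQL:2003's window functions, mentioned above, can be illustrated with a minimal running-total sketch (the employee table and its columns are hypothetical):
select emp_id,
       salary,
       sum(salary) over (order by emp_id) as running_total
from   employee;
Each row keeps its identity yet carries a cumulative sum over all preceding rows in emp_id order, something a GROUP BY aggregate would collapse.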
Dynamic SQL or Extended SQL (also called SQL3 or SQL-99)
ODBC, however, is a call-level interface (CLI) that uses a different approach. Using a CLI, SQL statements are passed to the database management system (DBMS) within a parameter of a runtime API. Because the text of the SQL statement is never known until runtime, the optimization step must be performed each time an SQL statement is run. This approach is commonly referred to as dynamic SQL. The simplest way to execute a dynamic SQL statement is with an EXECUTE IMMEDIATE statement, which passes the SQL statement to the DBMS for compilation and execution.
Static SQL or Embedded SQL
Static or embedded SQL statements are SQL statements in an application that do not change at runtime and, therefore, can be hard-coded into the application. This is the central idea of embedded SQL: placing SQL statements in a program written in a host programming language. Traditional SQL interfaces used an embedded SQL approach: SQL statements were placed directly in an application's source code, along with high-level language statements written in C, COBOL, RPG, and other programming languages. The source code then was precompiled, which translated the SQL statements into code that the subsequent compile step could process. This method is referred to as static SQL. One performance advantage of this approach is that SQL statements are optimized at the time the high-level program is compiled, rather than at runtime while the user is waiting. Static SQL statements in the same program are treated normally.
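In Oracle PL/SQL, for example, EXECUTE IMMEDIATE runs a statement whose text is assembled only at runtime. A minimal sketch, assuming a hypothetical employee table:
declare
  v_count number;
begin
  -- the statement text is just a string until runtime; :min_sal is a bind variable
  execute immediate
    'select count(*) from employee where salary > :min_sal'
    into v_count
    using 5000;
  dbms_output.put_line('Employees above threshold: ' || v_count);
end;
/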
Common Table Expressions (CTE)
Common table expressions (CTEs) enable you to name subqueries temporarily for a result set. You then refer to these like normal tables elsewhere in your query. This can make your SQL easier to write and understand later. CTEs go in the WITH clause above the SELECT statement.
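A minimal sketch of a CTE, again using a hypothetical employee table:
with dept_totals as (
  select dept_id, sum(salary) as total_sal
  from   employee
  group  by dept_id
)
select dept_id, total_sal
from   dept_totals
where  total_sal > 50000;
The subquery is named dept_totals once, then used like an ordinary table in the main query.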
Recursive Common Table Expression (CTE)
A recursive CTE is a CTE that references itself. By doing so, the CTE repeatedly executes and returns subsets of data until it returns the complete result set. A recursive CTE is useful in querying hierarchical data, such as organization charts where one employee reports to a manager, or a multi-level bill of materials where a product consists of many components and each component itself also consists of many other components.
Query-By-Example (QBE)
Query-By-Example (QBE) was the first interactive database query language to exploit such modes of HCI. In QBE, a query is constructed on an interactive terminal involving two-dimensional 'drawings' of one or more relations, visualized in tabular form, whose selected columns are filled in with 'examples' of the data items to be retrieved (thus the phrase query-by-example). It is different from SQL, and from most other database query languages, in having a graphical user interface that allows users to write queries by creating example tables on the screen. QBE, like SQL, was developed at IBM, and QBE is an IBM trademark, but a number of other companies sell QBE-like interfaces, including Paradox. A convenient shorthand notation is that if we want to print all fields in some relation, we can place P. under the name of the relation. This notation is like the SELECT * convention in SQL. It is equivalent to placing a P. in every field:
Example of QBE: AND and OR conditions in QBE
Key characteristics of SQL
Set-oriented and declarative
Free-form language
Case-insensitive
Can be used both interactively from a command prompt and executed by a program
Rules to write commands:
 Table names cannot exceed 20 characters.
 The name of the table must be unique.
 Field names also must be unique.
 The field list and field lengths must be enclosed in parentheses.
 The user must specify the field length and type.
 The field definitions must be separated with commas.
 SQL statements must end with a semicolon.
Database Design Phases/Stages
III. Physical design. The physical design step involves the selection of indexes (access methods), partitioning, and clustering of data. The logical design methodology in step II simplifies the approach to designing large relational databases by reducing the number of data dependencies that need to be analyzed. This is accomplished by inserting the conceptual data modeling and integration steps (II(a) and II(b) in the pictures) into the traditional relational design approach.
IV. Database implementation, monitoring, and modification. Once the design is completed, the database can be created through the implementation of the formal schema using the data definition language (DDL) of a DBMS.
General Properties of Database Objects
Entity: a distinct object, class, table, or relation.
Entity set: a collection of similar entities, e.g., all employees. All entities in an entity set have the same set of attributes.
Attribute: describes some aspect of the entity/object; a characteristic of an object. An attribute is a data item that describes a property of an entity or a relationship.
Column or field: the column represents the set of values for a specific attribute. An attribute is for a model and a column is for a table: a column is a column in a database table, whereas attributes are externally visible facets of an object.
A relation instance is a finite set of tuples in an RDBMS system. Relation instances never have duplicate tuples.
Relationship: an association between entities; the connected entities are called participants. Connectivity describes the relationship (1:1, 1:M, M:N).
The degree of a relationship refers to the number of entities.
For the relation in the image above: degree = 4 (attributes), cardinality = 5 (tuples), and data values/cells = 20.
Characteristics of a relation
1. Distinct relation/table name
2. Relations are unordered
3. Cells contain exactly one atomic (single) value; each cell (field) must contain a single value
4. No repeating groups
5. Distinct attribute names
6. The values of an attribute come from the same domain
7. The order of attributes has no significance
8. The attributes in R(A1, ..., An) and the values in t = <v1, v2, ..., vn> are ordered
9. Each tuple is distinct
10. The order of tuples has no significance; tuples may be stored and retrieved in an arbitrary order
11. Tables manage attributes: they store information in the form of attributes only
12. Tables contain rows; each row is one record only
13. All rows in a table have the same columns; columns are also called fields
14. Each field has a data type and a name
15. A relation must contain at least one attribute (column) that identifies each tuple (row) uniquely
Database table types
Temporary tables
Some RDBMSs support temporary tables. Temporary tables are a great feature that lets you store and process intermediate results by using the same selection, update, and join capabilities as ordinary tables. Temporary tables store session-specific data: only the session that adds the rows can see them. This can be handy for storing working data.
There are two types of temporary tables in the Oracle Database: global and private.
Global temporary tables
To create a global temporary table, add the clause "global temporary" between create and table. For example:
create global temporary table toys_gtt ( toy_name varchar2(100));
The definition of a global temp table is accessible to everyone: you create it once, it is registered in the data dictionary, and it lives "forever" (the "global" pertains to the schema definition).
Private/local temporary tables
Starting in Oracle Database 18c, you can create private temporary tables. These tables are only visible in your session; other sessions can't see the table! Such temporary tables can be very useful for keeping temporary data, since the table is created "on the fly" and disappears after its use; you never see it in the data dictionary.
Details of temp tables: a private temporary table is owned by the person who created it and can only be accessed by that user. A global temporary table is accessible to everyone and contains data specific to the session using it; multiple sessions can use the same global temporary table simultaneously. It is a global definition for a temporary table from which all can benefit. A local temporary table is invisible outside the connection that created it and is deleted when the connection is closed.
Clone tables
Temporary tables are available in MySQL version 3.23 onwards. There may be a situation when you need an exact copy of a table and the CREATE TABLE or SELECT commands do not suit your purposes because the copy must include the same indexes, default values, and so forth.
There are magic tables (virtual tables) in SQL Server that hold temporal information about recently inserted and recently deleted data. The DELETED magic table stores the before version of the row, and the INSERTED table stores the after version of the row, for any INSERT, UPDATE, or DELETE operation.
A record is a collection of data objects that are kept in fields, each having its own name and datatype. A record can be thought of as a variable that can store a table row or a set of columns from a table row; the record's fields correspond to the table's columns.
External tables
An external table is a read-only table whose metadata is stored in the database but whose data is stored outside the database.
Partitioning Tables and Table Splitting
Partitioning logically splits up a table into smaller tables according to the partition column(s), so rows with the same partition key are stored in the same physical location.
Horizontal data partitioning (table rows)
Horizontal partitioning divides a table into multiple tables that contain the same number of columns but fewer rows.
Vertical table partitioning (table columns)
Vertical partitioning splits a table into two or more tables containing different columns.
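A minimal Oracle-style sketch of horizontal (range) partitioning, using a hypothetical sales table:
create table sales (
  sale_id   number,
  sale_date date,
  amount    number
)
partition by range (sale_date) (
  partition p2023 values less than (date '2024-01-01'),
  partition p2024 values less than (date '2025-01-01')
);
Rows are routed to p2023 or p2024 by their sale_date, so each partition behaves like a smaller table with the same columns.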
Collections vs. records:
Collections: all items are of the same data type; same-data-type items are called elements; syntax: variable_name(index); for creating a collection variable you can use %TYPE; lists and arrays are examples.
Records: items can be of different data types; different-data-type items are called fields; syntax: variable_name.field_name; for creating a record variable you can use %ROWTYPE or %TYPE; table rows and their columns are examples.
Correlated vs. uncorrelated SQL expressions
A subquery is correlated when it joins to a table from the parent query; if it doesn't, it is uncorrelated. This leads to a difference between IN and EXISTS. EXISTS returns rows from the parent query as long as the subquery finds at least one row. So the following uncorrelated EXISTS returns all the rows in colors (provided bricks contains at least one row):
select * from colors
where exists ( select null from bricks );
Table organizations
Create a table in Oracle Database with an organization clause. This defines how it physically stores rows in the table. The options for this are:
1. Heap table organization (some DBMSs provide for tables to be created without indexes, with data accessed randomly)
2. Index table organization, or index-sequential table
3. Hash table organization (some DBMSs provide an alternative to an index: accessing data by a tree or by a hashing key/hashing function)
By default, tables are heap-organized. This means the database is free to store rows wherever there is space. You can add the "organization heap" clause if you want to be explicit.
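A minimal sketch contrasting the heap default with an index-organized table (the names are illustrative):
create table toys_heap (
  toy_id   number primary key,
  toy_name varchar2(100)
) organization heap;   -- explicit, but this is the default anyway

create table toys_iot (
  toy_id   number primary key,
  toy_name varchar2(100)
) organization index;  -- rows are stored in the primary key's B-tree
An index-organized table requires a primary key, because the rows themselves live in the primary key index structure.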
Big picture of database languages and command types
DMLs are of two types:
Low-level or procedural DMLs require a user to specify what data are needed and how to get those data. PL/SQL, Java, and relational algebra are the best examples. These can be used for query optimization.
High-level or declarative DMLs (also referred to as non-procedural DMLs) require a user to specify what data are needed without specifying how to get those data. SQL or Google Search are the best examples. These are not suitable for manual query optimization. TRC and DRC are declarative languages.
Other SQL clauses used during query evaluation
 Windowing clause: when you use ORDER BY, the database adds a default windowing clause of range between unbounded preceding and current row.
 Sliding windows: as well as running totals so far, you can change the windowing clause to be a subset of the previous rows. The following shows the total weight of:
1. The current row + the previous row
2. All rows with the same weight as the current + all rows with a weight one less than the current
(A sketch of a sliding window appears after the list of programming approaches below.)
Strategies for schema design in DBMS
Top-down strategy
Bottom-up strategy
Inside-out strategy
Mixed strategy
Identifying correspondences and conflicts during schema integration in DBMS
Naming conflicts
Type conflicts
Domain conflicts
Conflicts among constraints
Process of SQL
When we execute an SQL command on any relational database management system, the system automatically finds the best routine to carry out our request, and the SQL engine determines how to interpret that particular command. Structured Query Language uses the following four components in this process:
1. Query dispatcher
2. Optimization engines
3. Classic query engine
4. SQL query engine, etc.
SQL Programming: Approaches to Database Programming
In this section, we briefly compare three approaches to database programming and discuss the advantages and disadvantages of each. Several techniques exist for including database interactions in application programs. The main approaches for database programming are the following:
1. Embedding database commands in a general-purpose programming language (the embedded SQL approach). The main advantage of this approach is that the query text is part of the program source code itself, and hence can be checked for syntax errors and validated against the database schema at compile time.
Database Systems Handbook BY: MUHAMMAD SHARIF 116 2. Using a library of database functions. A library of functions is made available to the host programming language for database calls. Library of Function Calls Approach. This approach provides more flexibility in that queries can be generated at runtime if needed. 3. Designing a brand-new language. A database programming language is designed from scratch to be compatible with the database model and query language. Database Programming Language Approach. This approach does not suffer from the impedance mismatch problem, as the programming language data types are the same as the database data types.
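Returning to the windowing clause described above, here is a minimal sliding-window sketch, assuming a hypothetical bricks table with a numeric weight column:
select brick_id,
       weight,
       sum(weight) over (
         order by weight
         rows between 1 preceding and current row
       ) as w_current_plus_previous,
       sum(weight) over (
         order by weight
         range between 1 preceding and current row
       ) as w_same_or_one_less
from   bricks;
ROWS counts physical rows (the current row plus the one before it), while RANGE compares values (all rows whose weight equals the current weight or is up to one less).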
Standard SQL order of execution
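In outline, the standard logical order of evaluation is: FROM (and JOINs), then WHERE, then GROUP BY, then HAVING, then SELECT (including window functions), then DISTINCT, then ORDER BY, and finally any row-limiting clause (FETCH/LIMIT). Databases may physically execute a query differently, but the results must be as if the clauses were applied in this order.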
Types of Subqueries
1. FROM subqueries
2. Attribute-list subqueries
3. Inline subqueries
4. Correlated subqueries
5. WHERE subqueries
6. IN subqueries
7. HAVING subqueries
8. Multirow subquery operators: ANY and ALL
Scalar subqueries
Scalar subqueries return one column and at most one row. You can replace a column with a scalar subquery in most cases.
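A minimal scalar-subquery sketch, assuming hypothetical colours and bricks tables:
select c.colour_name,
       ( select count(*)
         from   bricks b
         where  b.colour = c.colour_name ) as brick_count
from   colours c;
The subquery returns exactly one value per outer row, so it can stand wherever a column could; because it references c.colour_name from the parent query, it is also an example of a correlated subquery.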
Database Systems Handbook BY: MUHAMMAD SHARIF 121 We can once again be faced with possible ambiguity among attribute names if attributes of the same name exist— one in a relation in the FROM clause of the outer query, and another in a relation in the FROM clause of the nested query. The rule is that a reference to an unqualified attribute refers to the relation declared in the innermost nested query.
Some important differences in DML statements:
Difference between DELETE and TRUNCATE statements
There is a slight difference between the DELETE and TRUNCATE statements. The DELETE statement only deletes the rows from the table based on the condition defined by the WHERE clause, or deletes all the rows from the table when no condition is specified; but it does not free the space occupied by the table.
The TRUNCATE statement is used to delete all the rows from the table and free the space it occupies.
Difference between DROP and TRUNCATE statements
When you use the DROP statement, it deletes the table's rows together with the table's definition, so all relationships of that table with other tables are no longer valid. When you drop a table:
The table structure is dropped
Relationships are dropped
Integrity constraints are dropped
Access privileges are also dropped
On the other hand, when we TRUNCATE a table, the table structure remains the same, so you will not face any of the above problems.
In general, ANSI SQL permits the use of ON DELETE and ON UPDATE clauses to cover CASCADE, SET NULL, or SET DEFAULT. MS Access, SQL Server, and Oracle support ON DELETE CASCADE. MS Access and SQL Server support ON UPDATE CASCADE. Oracle does not support ON UPDATE CASCADE. Oracle supports SET NULL. MS Access and SQL Server do not support SET NULL. Refer to your product manuals for additional information on referential constraints. While MS Access does not support ON DELETE CASCADE or ON UPDATE CASCADE at the SQL command-line level, it does offer cascading behavior through its graphical relationship tools.
Types of multitable INSERT statements
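A minimal sketch of an Oracle multitable INSERT ALL, assuming hypothetical orders, orders_archive, and orders_audit tables with matching columns:
insert all
  into orders_archive (order_id, status)     values (order_id, status)
  into orders_audit   (order_id, changed_on) values (order_id, sysdate)
select order_id, status
from   orders;
Each source row is written to both target tables in a single statement; conditional variants (INSERT ALL WHEN ... / INSERT FIRST) route rows to different targets based on predicates.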
DML before- and after-processing in triggers
Database views and their types
The definition of views is one of the final stages in database design, since it relies on the logical schema being finalized. Views are "virtual tables" that are a selection of rows and columns from one or more real tables and can include calculated values in additional virtual columns. A view is a virtual relation: one that does not physically exist but is dynamically derived. It can be constructed by performing operations (i.e., select, project, join, etc.) on the values of existing base relations (a base relation being a named relation designed in the conceptual schema whose tuples are physically stored in the database). Views are viewable in the external schema.
Database Systems Handbook BY: MUHAMMAD SHARIF 127 Types of View 1. User-defined view a. Simple view (Single table view) b. Complex View (Multiple tables having joins, group by, and functions) c. Inline View (Based on a subquery in from clause to create a temp table and form a complex query) d. Materialized View (It stores physical data, definitions of tables) e. Dynamic view f. Static view 2. Database View 3. System Defined Views 4. Information Schema View 5. Catalog View 6. Dynamic Management View 7. Server-scoped Dynamic Management View 8. Sources of Data Dictionary Information View a. General Views b. Transaction Service Views c. SQL Service Views
Advantages of views:
Provide security
Hide specific parts of the database from certain users
Customize base relations based on users' needs
Support the external model
Provide logical independence
Views don't store data in a physical location
Views can provide access restriction, since data insertion, update, and deletion are not possible through the view.
We can perform DML on a view if it is derived from a single base relation and contains the primary key or a candidate key.
When can a view be updated?
1. The view is defined based on one and only one table.
2. The view must include the PRIMARY KEY of the table upon which it has been created.
3. The view should not have any field made out of aggregate functions.
4. The view must not have any DISTINCT clause in its definition.
5. The view must not have any GROUP BY or HAVING clause in its definition.
6. The view must not have any SUBQUERIES in its definition.
7. If the view you want to update is based upon another view, the latter should be updatable.
8. None of the selected output fields (of the view) may use constants, strings, or value expressions.
END
CHAPTER 6 DATABASE NORMALIZATION AND DATABASE JOINS
Quick Overview of Codd's 12 Rules
A system that merely has tables and constraints cannot be called a relational database system, and a database that implements only part of the relational data model cannot be called a relational database management system (RDBMS). So, some rules define what a correct RDBMS is. These rules were developed in 1985 by Dr. Edgar F. Codd (E. F. Codd), who had vast research knowledge of the relational model of database systems. Codd presented 13 rules (numbered 0 to 12) for testing a DBMS against his relational model; if a database follows the rules, it is called a true relational database (RDBMS). These are popularly known as Codd's 12 rules.
Rule 0: The Foundation Rule
The database must be in relational form, so that the system can handle the database through its relational capabilities.
Rule 1: Information Rule
A database contains various information, and this information must be stored in the cells of tables in the form of rows and columns.
Rule 2: Guaranteed Access Rule
Every single or precise datum (atomic value) must be logically accessible from the relational database using the combination of primary key value, table name, and column name. Each attribute of a relation has a name.
Rule 3: Systematic Treatment of Null Values
Nulls must be represented and treated in a systematic way, independent of data type. A null value has various meanings in the database, like missing data, no value in a cell, inappropriate information, or unknown data; the primary key should never be null.
Rule 4: Active/Dynamic Online Catalog Based on the Relational Model
The entire logical structure of the database must be described in an online catalog, known as the database dictionary. Authorized users must be able to access the catalog using the same query language they use to access the database itself; metadata must be stored and managed as ordinary data.
Rule 5: Comprehensive Data Sublanguage Rule
The relational database may support various languages, but at least one language must be explicit, with linear, well-defined syntax and character strings, supporting comprehensively: data definition, view definition, data manipulation, integrity constraints, and transaction management operations. If the database allows access to the data without any such language, it is considered a violation of this rule.
Rule 6: View Updating Rule
All views that are theoretically updatable must also be practically updatable by the database system.
Rule 7: Relational-Level Operation (High-Level Insert, Update, and Delete) Rule
A database system must support high-level, set-at-a-time relational operations such as insert, update, and delete, not just operations on a single row. It must also support the union, intersection, and minus operations.
Rule 8: Physical Data Independence Rule
All stored data in a database or an application must be physically independent of how it is accessed. If data is updated or the physical structure of the database is changed, external applications that access the data from the database must not be affected.
Rule 9: Logical Data Independence Rule
This is similar to physical data independence: if any changes occur at the logical level (table structures), they should not affect the user's view (application).
For example, if a table is split into two tables, or two tables are joined to create a single table, these changes should have no impact on the user's view of the application.
Rule 10: Integrity Independence Rule Integrity constraints must be maintained by the database itself: entered values should not rely on any external factor or application program to preserve integrity. This also helps keep the database independent of each front-end application.
Rule 11: Distribution Independence Rule The database must work properly even if its data is stored in different locations and used by different end users. If a user accesses the database through an application, they should not need to be aware that the data is distributed or that another user is using particular data; the data should appear as if it were located at a single site. End users must be able to run their SQL queries independently of how the data is distributed.
Rule 12: Non-Subversion Rule If the system provides a low-level (record-at-a-time) interface alongside the relational SQL language used to store and manipulate data, that low-level interface must not be able to subvert or bypass the integrity rules of the database when transforming data.
Normalization
Normalization is a refinement technique: it reduces redundancy and eliminates undesirable characteristics such as insertion, update, and deletion anomalies, removing anomalies and repetitions. Normalization and E-R modeling are used concurrently to produce a good database design.
Advantages of normalization
Reduces data redundancies
Expands the number of entities
Helps eliminate data anomalies
Produces controlled redundancies to link tables
Its main cost: queries may require more processing effort (more joins)
Normalization proceeds through a series of steps called normal forms
Anomalies of a bad database design
A poorly designed table with data redundancies yields the following anomalies:
1. Update anomalies: changing the price of product ID 4 requires an update in several records. If data items are scattered and not linked to each other properly, strange situations can result.
2. Insertion anomalies: a new employee must be assigned a project (a phantom project); that is, we are forced to insert data into a record for an entity that does not exist at all.
3. Deletion anomalies: if an employee is deleted, other vital data is lost; or we try to delete a record, but parts of it are left undeleted because, without our being aware of it, the data is also saved somewhere else. For example, if we delete the Dining Table from Order 1006, we lose the information concerning this item's finish and price.
Anomaly types with respect to database table constraints
In most cases, if you can place your relations in the third normal form (3NF), you will have avoided most of the problems common to bad relational designs. Boyce-Codd normal form (BCNF) and the fourth normal form (4NF) handle special situations that arise only occasionally.
 1st Normal Form:
Before normalization, a table typically contains repeating groups. In the conversion to first normal form we eliminate repeating groups from table records and develop a proper primary key on which all attributes depend. In 1NF:
The primary key uniquely identifies attribute values (rows)
Dependencies can be identified
There are no multivalued attributes
Every attribute value is atomic
A functional dependency exists when the value of one thing is fully determined by another. For example, given the relation EMP(empNo, empName, sal), attribute empName is functionally dependent on attribute empNo: if we know empNo, we also know the empName.
Types of dependencies
Partial (based on part of a composite primary key)
Transitive (one non-prime attribute depends on another non-prime attribute)
Database Systems Handbook BY: MUHAMMAD SHARIF 133 PROJ_NUM,EMP_NUM  PROJ_NAME, EMP_NAME, JOB_CLASS,CHG_HOUR, HOURS  2nd Normal form: Start with the 1NF format: Write each key component on a separate line Partial dependency has been ended by separating the table with its original key as a new table. Keys with their respective attributes would be a new table. Still possible to exhibit transitive dependency A relation will be in 2NF if it is in 1NF and all non-key attributes are fully functional and dependent on the primary key. No partial dependency should exist in the relation  3rd Normal form: Create a separate table(s) to eliminate transitive functional dependencies 2NF PLUS no transitive dependencies (functional dependencies on non-primary-key attributes) In 3NF no transitive functional dependency exists for non-prime attributes in a relation. It will be when a non-key attribute is dependent on a non-key attribute or a functional dependency exists between non-key attributes.  Boyce-Codd Normal Form (BCNF) 3NF table with one candidate key is already in BCNF It contains a fully functional dependency Every determinant in the table is a candidate key. BCNF is the advanced version of 3NF. It is stricter than 3NF. A table is in BCNF if every functional dependency X → Y, X is the super key of the table. For BCNF, the table should be in 3NF, and for every FD, LHS is super key.  4th Fourth normal form (4NF) A relation will be in 4NF if it is in Boyce Codd's normal form and has no multi-valued dependency. For a dependency A → B, if for a single value of A, multiple values of B exist, then the relationship will be a multi- valued dependency.  5th Fifth normal form (5NF) A relation is in 5NF if it is in 4NF and does not contain any join dependency and joining should be lossless. 5NF is satisfied when all the tables are broken into as many tables as possible to avoid redundancy. 5NF is also known as Project-join normal form (PJ/NF).
Denormalization in Databases
Denormalization is a database optimization technique in which we add redundant data to one or more tables. This can help us avoid costly joins in a relational database. Note that denormalization does not mean skipping normalization: it is an optimization technique applied after normalization.
Types of Denormalization
The two most common types of denormalization are merging two entities in a one-to-one relationship and merging two entities in a one-to-many relationship.
Pros of Denormalization:
Database Systems Handbook BY: MUHAMMAD SHARIF 135 Retrieving data is faster since we do fewer joins Queries to retrieve can be simpler (and therefore less likely to have bugs), since we need to look at fewer tables. Cons of Denormalization: - Updates and inserts are more expensive. Denormalization can make an update and insert code harder to write. Data may be inconsistent. Which is the “correct” value for a piece of data? Data redundancy necessities more storage. Relational Decomposition Decomposition is used to eliminate some of the problems of bad design like anomalies, inconsistencies, and redundancy. When a relation in the relational model is not inappropriate normal form then the decomposition of a relationship is required. In a database, it breaks the table into multiple tables. Types of Decomposition 1 Lossless Decomposition If the information is not lost from the relation that is decomposed, then the decomposition will be lossless. The process of normalization depends on being able to factor or decompose a table into two or smaller tables, in such a way that we can recapture the precise content of the original table by joining the decomposed parts. 2 Lossy Decomposition Data will be lost for more decomposition of the table. Database SQL Joins Join is a combination of a Cartesian product followed by a selection process. Database join types:  Non-ANSI Format Join 1. Non-Equi join 2. Self-join 3. Equi Join / equvi join  ANSI format join 1. Semi Join 2. Left/right semi join 3. Anti Semi join 4. Bloom Join 5. Natural Join(Inner join, self join, theta join, cross join/cartesian product, conditional join) 6. Inner join (Equi and theta join/self-join) 7. Theta (θ) 8. Cross join 9. Cross products
10. Multi-join operation
11. Outer join
o Left outer join
o Right outer join
o Full outer join
 Several different algorithms can be used to implement joins (natural join, condition join):
1. Nested loops join
o Simple nested loop join
o Block nested loop join
o Index nested loop join
2. Sort-merge join/external sort join
3. Hash join
(Illustrative SQL for the most common join types follows.)
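A minimal SQL sketch of the most common join types, assuming hypothetical EMP(emp_id, emp_name, dept_id) and DEPT(dept_id, dept_name) tables:

-- Inner (equi) join: only matching rows from both tables
SELECT e.emp_name, d.dept_name
FROM emp e JOIN dept d ON e.dept_id = d.dept_id;

-- Left outer join: all rows of EMP, with NULLs where no department matches
SELECT e.emp_name, d.dept_name
FROM emp e LEFT OUTER JOIN dept d ON e.dept_id = d.dept_id;

-- Cross join: the Cartesian product of the two tables
SELECT e.emp_name, d.dept_name
FROM emp e CROSS JOIN dept d;

-- Non-equi join (here also a self-join): join condition other than equality
SELECT e1.emp_name, e2.emp_name
FROM emp e1 JOIN emp e2 ON e1.emp_id < e2.emp_id;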
END
CHAPTER 7 FUNCTIONAL DEPENDENCIES IN THE DATABASE MANAGEMENT SYSTEM
Functional Dependency
A functional dependency (FD) is a set of constraints between two attributes in a relation. A functional dependency says that if two tuples have the same values for attributes A1, A2, ..., An, then those two tuples must have the same values for attributes B1, B2, ..., Bn. A functional dependency is represented by an arrow sign (→), that is, X → Y, where X functionally determines Y: the left-hand side attributes determine the values of the attributes on the right-hand side.
Types of schema dependency
SQL Server records two types of dependency: schema-bound and non-schema-bound dependencies.
A schema-bound dependency prevents the referenced object from being altered or dropped without first removing the dependency. An example of a schema-bound reference is a view created on a table using the WITH SCHEMABINDING option.
A non-schema-bound dependency does not prevent the referenced object from being altered or dropped. An example is a stored procedure that selects from a table: the table can be dropped without first dropping the stored procedure or removing the reference to the table from that stored procedure.
Consider the following. A functional dependency is a constraint that determines the relation of one attribute to another attribute, denoted by the arrow "→"; the functional dependency of Y on X is written X → Y. For example, if we know the value of the employee number, we can obtain the employee name, city, salary, and so on; by this, we can say that city, employee name, and salary are functionally dependent on the employee number.
Key Terms for Functional Dependency in a Database
Axiom: axioms are a set of inference rules used to infer all the functional dependencies that hold on a relational database.
Decomposition: a rule suggesting that if a table appears to contain two entities determined by the same primary key, you should consider breaking it up into two different tables.
Dependent: displayed on the right side of the functional dependency diagram.
Determinant: displayed on the left side of the functional dependency diagram.
Union: a rule suggesting that if two tables are separate and have the same primary key, you should consider putting them together.
Armstrong's Axioms
The inclusion rule is one rule of implication by which FDs can be generated that are guaranteed to hold for all possible tables. It turns out that from a small set of basic rules of implication, we can derive all others. We list here three basic rules, called Armstrong's Axioms. Armstrong's axioms were developed by William Armstrong in 1974 to reason about functional dependencies. The rules are:
1. Transitivity: if A → B and B → C, then A → C, i.e., a transitive relation.
2. Reflexivity: if B is a subset of A, then A → B.
3. Augmentation: if A → B, then AC → BC.
Inference Rules (IR)
Armstrong's axioms are the basic inference rules, used to derive all the functional dependencies that hold on a relational database. An inference rule is a type of assertion that can be applied to a set of functional dependencies to derive other FDs. There are six inference rules for functional dependencies:
1. Reflexive Rule (IR1)
2. Augmentation Rule (IR2)
3. Transitive Rule (IR3)
4. Union Rule (IR4)
5. Decomposition Rule (IR5)
6. Pseudotransitive Rule (IR6)
Functional Dependency Types
Dependencies in a DBMS are relations between two or more attributes. They come in the following types.
Functional Dependency
Database Systems Handbook BY: MUHAMMAD SHARIF 141 If the information stored in a table can uniquely determine another information in the same table, then it is called Functional Dependency. Consider it as an association between two attributes of the same relation. Major type are Trivial, non-trival, complete functional, multivalued, transitive dependency Partial Dependency Partial Dependency occurs when a nonprime attribute is functionally dependent on part of a candidate key. Multivalued Dependency When the existence of one or more rows in a table implies one or more other rows in the same table, then the Multi-valued dependencies occur. Multivalued dependency occurs when two attributes in a table are independent of each other but, both depend on a third attribute. A multivalued dependency consists of at least two attributes that are dependent on a third attribute that's why it always requires at least three attributes. Join Dependency Join decomposition is a further generalization of Multivalued dependencies. If the join of R1 and R2 over C is equal to relation R, then we can say that a join dependency (JD) exists. Inclusion Dependency Multivalued dependency and join dependency can be used to guide database design although they both are less common than functional dependencies. The inclusion dependency is a statement in which some columns of a relation are contained in other columns. Transitive Dependency When an indirect relationship causes functional dependency it is called Transitive Dependency. Fully-functionally Dependency An attribute is fully functional dependent on another attribute if it is Functionally Dependent on that attribute and not on any of its proper subset Trivial functional dependency A → B has trivial functional dependency if B is a subset of A. The following dependencies are also trivial: A → A, B → B { DeptId, DeptName } -> Dept Id Non-trivial functional dependency A → B has a non-trivial functional dependency if B is not a subset of A. Trivial − If a functional dependency (FD) X → Y holds, where Y is a subset of X, then it is called a trivial FD. It occurs when B is not a subset of A in − A ->B DeptId -> DeptName Non-trivial − If an FD X → Y holds, where Y is not a subset of X, then it is called a non-trivial FD. Completely non-trivial − If an FD X → Y holds, where x intersects Y = Φ, it is said to be a completely non-trivial FD. When A intersection B is NULL, then A → B is called a complete non-trivial. A ->B Intersaction is empty. Multivalued Dependency and its types 1. Join Dependency 2. Join decomposition is a further generalization of Multivalued dependencies. 3. Inclusion Dependency Example of Dependency diagrams and flow
Dependency Preservation
If a relation R is decomposed into relations R1 and R2, then the dependencies of R must either be part of R1 or R2 or be derivable from the combination of the functional dependencies of R1 and R2. For example, suppose there is a relation R(A, B, C, D) with functional dependency set (A → BC). The relation R decomposed into R1(ABC) and R2(AD) is dependency preserving, because the FD A → BC is part of R1(ABC).
Find the canonical cover
Solution: Given FD = { B → A, AD → BC, C → ABD }, first decompose the FDs using the decomposition rule (Armstrong axiom):
B → A
AD → B (using the decomposition inference rule on AD → BC)
AD → C (using the decomposition inference rule on AD → BC)
C → A (using the decomposition inference rule on C → ABD)
C → B (using the decomposition inference rule on C → ABD)
C → D (using the decomposition inference rule on C → ABD)
Now the set of FDs = { B → A, AD → B, AD → C, C → A, C → B, C → D }.
Canonical Cover / Irreducible Set
A canonical cover, or irreducible set, of functional dependencies FD is a simplified set of FDs that has the same closure as the original set.
Extraneous attributes
An attribute of an FD is said to be extraneous if we can remove it without changing the closure of the set of FDs.
Closure of Functional Dependency
The closure of a functional dependency is the complete set of all attributes that can be functionally derived from a given functional dependency using the inference rules known as Armstrong's rules. If F is a set of functional dependencies, its closure is denoted "{F}+". There are three steps to calculate the closure of a functional dependency:
Step 1: Add the attributes that are present on the left-hand side of the original functional dependency.
Step 2: Add the attributes present on the right-hand side of the functional dependency.
Step 3: With the help of the attributes on the right-hand side, check which other attributes can be derived from the other given functional dependencies. Repeat this process until all the attributes that can possibly be derived have been added to the closure.
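As a worked example, take the decomposed set above, F = { B → A, AD → B, AD → C, C → A, C → B, C → D }, and compute the closure {C}+ following the three steps:
Start: {C}+ = {C} (Step 1: the left-hand side itself)
C → A adds A: {C}+ = {A, C}
C → B adds B: {C}+ = {A, B, C}
C → D adds D: {C}+ = {A, B, C, D}
No further attributes can be derived, so {C}+ = {A, B, C, D}. Since the closure of C contains every attribute of the relation, C is a candidate key. By the same procedure, {AD}+ = {A, B, C, D} (via AD → B and AD → C), while {B}+ = {A, B}, so B alone is not a key.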
CHAPTER 8 DATABASE TRANSACTION, SCHEDULES, AND DEADLOCKS
Overview: Transaction
A transaction is an atomic sequence of actions in the database (reads, writes, commit, and abort). Each transaction must be executed completely and must leave the database in a consistent state. A transaction is a set of logically related operations: it contains a group of tasks, an action or series of actions performed by a single user to access the contents of the database. A single task is the minimum processing unit, which cannot be divided further.
ACID
Data concurrency means that many users can access data at the same time. Data consistency means that each user sees a consistent view of the data, including visible changes made by the user's own transactions and the transactions of other users. The ACID model provides a consistent system for relational databases; the BASE model provides high availability for non-relational databases, such as the NoSQL database MongoDB.
Techniques for achieving the ACID properties
Write-ahead logging and checkpointing
Serializability and two-phase locking
Some important points (which component is responsible for maintaining each property):
Atomicity — Transaction Manager. Data remains atomic: the transaction is executed completely or not at all; an operation should not break in between or execute partially. Either all of R(A) and W(A) are done, or none are done.
Consistency — Application programmer/application logic checks; related to rollbacks.
Isolation — Concurrency Control Manager, which handles concurrency.
Durability — Recovery Manager (Algorithms for Recovery and Isolation Exploiting Semantics, ARIES).
In short: logging and recovery handle failures (A, D); concurrency control, rollback, and the application programmer handle (C, I).
Consistency: the word consistency means that values are preserved; the database remains consistent before and after the transaction.
Isolation and levels of isolation: the term 'isolation' means separation. Changes that occur in a particular transaction are not visible to other transactions until the change is committed. A transaction isolation level is defined by the phenomena it permits; the concurrency control problems and the isolation-level phenomena are the same things, the classic bad transaction dependencies. Locks are often used to prevent these dependencies.
The five concurrency problems that can occur in the database are:
1. Temporary Update Problem
2. Incorrect Summary Problem
3. Lost Update Problem
4. Unrepeatable Read Problem
5. Phantom Read Problem
Dirty Read – a dirty read is a situation in which a transaction reads data that has not yet been committed. For example, say transaction 1 updates a row and leaves it uncommitted; meanwhile, transaction 2 reads the updated row. If transaction 1 rolls back the change, transaction 2 will have read data that is considered never to have existed. (Dirty Read Problem, a W-R conflict.)
Lost Update – lost updates occur when multiple transactions select the same row and update the row based on the value originally selected. (Lost Update Problem, a W-W conflict.)
Non-Repeatable Read – a non-repeatable read occurs when a transaction reads the same row twice and gets a different value each time. For example, suppose transaction T1 reads data; due to concurrency, another transaction T2 updates the same data and commits. If T1 rereads the data, it retrieves a different value. (Unrepeatable Read Problem, an R-W conflict.)
Phantom Read – a phantom read occurs when the same query is executed twice but the rows retrieved by the two executions differ. For example, suppose transaction T1 retrieves a set of rows that satisfy some search criteria; transaction T2 then inserts new rows that match T1's search criteria. If T1 re-executes the statement that reads the rows, it gets a different set of rows this time.
Based on these phenomena, the SQL standard defines four isolation levels (an example of setting them appears after this list):
Read Uncommitted – the lowest isolation level. One transaction may read changes made by another transaction that are not yet committed, thereby allowing dirty reads. At this level, transactions are not isolated from each other.
Read Committed – guarantees that any data read was committed at the moment it was read, so it does not allow dirty reads. The transaction holds a read or write lock on the current row and thus prevents other transactions from reading, updating, or deleting it.
Repeatable Read – a more restrictive isolation level. The transaction holds read locks on all rows it references and write locks on all rows it inserts, updates, or deletes. Since other transactions cannot read, update, or delete these rows, it avoids non-repeatable reads.
Serializable – the highest isolation level. A serializable execution is guaranteed to have the same effect as some serial execution: concurrently executing transactions appear to be executing serially.
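A minimal SQL sketch of choosing an isolation level for the next transaction (the statement below is standard SQL; the exact syntax and the set of supported levels vary by DBMS):

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- or, for the strictest guarantees:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;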
Durability: durability ensures permanency. In a DBMS, durability means that after the successful execution of an operation, its data becomes permanent in the database: if a transaction is committed, its effects remain even through errors, power loss, and the like.
ACID Example:
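As an illustrative sketch (assuming a hypothetical ACCOUNTS(acc_no, balance) table; how a transaction is started varies by DBMS — BEGIN, BEGIN TRANSACTION, or implicitly), a classic bank transfer exhibits all four properties: both updates happen or neither (atomicity), money is neither created nor destroyed (consistency), other transactions do not see the intermediate state (isolation), and once committed, the transfer survives failures (durability):

BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE acc_no = 'A-101';
UPDATE accounts SET balance = balance + 100 WHERE acc_no = 'B-202';
COMMIT;
-- If anything fails between BEGIN and COMMIT, ROLLBACK restores the old state.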
States of a Transaction
Begin, active, partially committed, failed, committed, end, aborted.
Details on the aborted state are necessary. If any check fails and the transaction reaches a failed state, the database recovery system must make sure the database is returned to its previous consistent state; if it is not, the system aborts or rolls back the transaction to bring the database into a consistent state. If the transaction fails in the middle of execution, all of its executed operations are rolled back, restoring the consistent state that existed before the transaction started. After aborting the transaction, the database recovery module selects one of two operations:
1) Restart the transaction
2) Kill the transaction
The concurrency control protocols ensure the atomicity, consistency, isolation, durability, and serializability of the concurrent execution of database transactions. These protocols are categorized as:
1. Lock-Based Concurrency Control Protocol
2. Timestamp Concurrency Control Protocol
3. Validation-Based Concurrency Control Protocol
The scheduler
A module that schedules the transactions' actions, ensuring serializability. There are two main approaches:
1. Pessimistic: locks
2. Optimistic: timestamps, multi-versioning, validation
Scheduling
A scheduler is responsible for managing jobs/transactions when many jobs are entered at the same time (by multiple users), together with the read/write operations those jobs perform. A schedule is a sequence of interleaved actions from all transactions: an execution of several transactions' actions that preserves the order of the R(A) and W(A) operations within each individual transaction.
Note: Two schedules are equivalent if they have the same dependencies; that is:
They contain the same transactions and operations
They order all conflicting operations of non-aborting transactions in the same way
A schedule is serializable if it is equivalent to a serial schedule.
Process scheduling (in the operating system) handles the selection of a process for the processor on the basis of a scheduling algorithm, as well as the removal of a process from the processor. It is an important part of multiprogramming in an operating system. Process scheduling involves short-term, medium-term, and long-term scheduling.
The major differences between the long-term, medium-term, and short-term schedulers are as follows:
The long-term scheduler is a job scheduler; the medium-term scheduler is a process-swapping scheduler; the short-term scheduler is called the CPU scheduler.
The speed of the long-term scheduler is less than that of the short-term scheduler; the speed of the medium-term scheduler is in between the two; the short-term scheduler is the fastest of the three.
The long-term scheduler controls the degree of multiprogramming; the medium-term scheduler reduces the degree of multiprogramming; the short-term scheduler provides lesser control over the degree of multiprogramming.
The long-term scheduler is almost absent or minimal in a time-sharing system; the medium-term scheduler is a part of the time-sharing system; the short-term scheduler is also minimal in a time-sharing system.
The long-term scheduler selects processes from the pool and loads them into memory for execution; the medium-term scheduler can reintroduce a swapped-out process into memory so that its execution can be continued; the short-term scheduler selects from among those processes that are ready to execute.
Serial Schedule
A serial schedule is a type of schedule in which one transaction is executed completely before another transaction starts.
Example of a Serial Schedule
Non-Serial Schedule and its types:
If interleaving of operations is allowed, the schedule is non-serial.
Serializable schedule
Serializability is a guarantee about transactions over one or more objects; it does not impose real-time constraints. A schedule is serializable if its precedence graph is acyclic. Serializability of schedules is used to find non-serial schedules that allow transactions to execute concurrently without interfering with one another.
Example of a Serializable Schedule
A serializable schedule always leaves the database in a consistent state. A serial schedule is always serializable because, in a serial schedule, a transaction only starts when the previous transaction has finished execution. A non-serial schedule, however, needs to be checked for serializability: a non-serial schedule of n transactions is serializable if it is equivalent to some serial schedule of those n transactions. A serial schedule does not allow concurrency: only one transaction executes at a time, and the next one starts only when the currently running transaction has finished.
Linearizability: a guarantee about single operations on single objects. Once a write completes, all later reads (by wall-clock time) should reflect that write.
Types of Serializability
There are two types of serializability:
1. Conflict Serializability
2. View Serializability
Conflict Serializable
A schedule is conflict serializable if it is equivalent to some serial schedule: non-conflicting operations can be reordered to obtain a serial schedule. In general, a schedule is conflict-serializable if and only if its precedence graph is acyclic; the precedence graph is what is used to test for conflict-serializability.
View Serializable
View serializability (view equivalence) is the concept used to determine whether a schedule is view-serializable. A schedule is view-serializable if it is view equivalent to a serial schedule (one in which no interleaving of transactions occurs).
Note: A schedule is view serializable if it is view equivalent to a serial schedule. If a schedule is conflict serializable, then it is also view serializable, but not vice versa.
Non-Serializable Schedules
Database Systems Handbook BY: MUHAMMAD SHARIF 157 The non-serializable schedule is divided into two types, Recoverable and Non-recoverable Schedules. 1. Recoverable Schedule(Cascading Schedule, cascades Schedule, strict Schedule). In a recoverable schedule, if a transaction T commits, then any other transaction that T read from must also have committed. A schedule is recoverable if: It is conflict-serializable, and Whenever a transaction T commits, all transactions that have written elements read by T have already been committed. Example of Recoverable Schedule 2. Non-Recoverable Schedule The relation between various types of schedules can be depicted as:
It can be seen that:
1. Cascadeless schedules are stricter than recoverable schedules, i.e., a subset of recoverable schedules.
2. Strict schedules are stricter than cascadeless schedules, i.e., a subset of cascadeless schedules.
3. Serial schedules satisfy the constraints of recoverable, cascadeless, and strict schedules, and hence are a subset of strict schedules.
Note: Linearizability + serializability = strict serializability: transaction behavior is equivalent to some serial execution, and that serial execution agrees with real time.
Serializability Theorems
Wormhole Theorem: a history is isolated if, and only if, it has no wormhole transactions.
Locking Theorem: if all transactions are well-formed and two-phase, then any legal history will be isolated.
Locking Theorem (converse): if a transaction is not well-formed or is not two-phase, then it is possible to write another transaction such that the resulting pair is a wormhole.
Rollback Theorem: an update transaction that does an UNLOCK and then a ROLLBACK is not two-phase.
The Thomas Write Rule preserves the serializability order of the protocol and improves the basic timestamp-ordering algorithm. The basic Thomas write rules, applied to a write operation W_item(X) by transaction T, are as follows:
If TS(T) < R_TS(X), then transaction T is aborted and rolled back, and the operation is rejected.
If TS(T) < W_TS(X), then do not execute the W_item(X) operation (the write is obsolete) and continue processing.
Read-Write Conflict Types in a DBMS
As mentioned earlier, the read operation by itself is safe, since it does not modify any information; so there is no Read-Read (RR) conflict in a database. That leaves three types of conflict in database transactions:
Problem 1: Reading uncommitted data (WR conflicts). Reading the value of an uncommitted object might yield an inconsistency: dirty reads, or write-then-read (WR) conflicts.
Problem 2: Unrepeatable reads (RW conflicts). Reading the same object twice might yield an inconsistency: read-then-write (RW) conflicts (write-after-read).
Problem 3: Overwriting uncommitted data (WW conflicts). Overwriting an uncommitted object might yield an inconsistency.
What is a Write-Read (WR) conflict? This conflict occurs when a transaction reads data written by another transaction that has not yet committed.
What is a Read-Write (RW) conflict? Transaction T2 writes data that was previously read by transaction T1.
Here, if you look at the diagram above, the data read by transaction T1 before and after T2 commits is different.
What is a Write-Write (WW) conflict? Here transaction T2 writes data that was already written by transaction T1: T2 overwrites the data written by T1. This is also called a blind write. The data written by T1 has vanished, so this is a data update loss.
Phase Commit (PC) Protocols
One-phase commit
The single-phase commit protocol is more efficient at run time because all updates are done without any explicit coordination. For example:
BEGIN;
INSERT INTO CUSTOMERS (ID, NAME, AGE, ADDRESS, SALARY) VALUES (1, 'Ramesh', 32, 'Ahmedabad', 2000.00);
INSERT INTO CUSTOMERS (ID, NAME, AGE, ADDRESS, SALARY) VALUES (2, 'Khilan', 25, 'Delhi', 1500.00);
COMMIT;
Two-Phase Commit (2PC)
The most commonly used atomic commit protocol is the two-phase commit. You may notice that it is very similar to the protocol used for total-order multicast: whereas the multicast protocol uses a two-phase approach to allow the coordinator to select a commit time based on information from the participants, two-phase commit lets the coordinator decide whether a transaction will be committed or aborted based on information from the participants.
Three-Phase Commit (3PC)
Another real-world atomic commit protocol is the three-phase commit. This protocol can reduce the amount of blocking and provide more flexible recovery in the event of failure. Although it is a better choice in unusually failure-prone environments, its complexity makes 2PC the more popular choice.
In a distributed database we thus obtain transaction atomicity using two-phase commit, and transaction serializability using distributed locking.
DBMS Deadlock Types and Techniques
Database Systems Handbook BY: MUHAMMAD SHARIF 160 All lock requests are made to the concurrency-control manager. Transactions proceed only once the lock request is granted. A lock is a variable, associated with the data item, which controls the access of that data item. Locking is the most widely used form of concurrency control. Deadlock Example:
Lock modes and types
1. Binary Locks: a binary lock on a data item can be in one of two states, locked or unlocked.
2. Shared/Exclusive Locks: this locking mechanism separates locks in a DBMS based on their use. A lock acquired on a data item in order to perform a write operation is an exclusive lock.
3. Simplistic Lock Protocol: this type of lock-based protocol requires transactions to obtain a lock on every object before beginning an operation; a transaction may unlock a data item after finishing the 'write' operation on it.
4. Pre-claiming Locking: the transaction requests locks on all the data items it needs before it begins execution; if all the locks are granted, it executes and releases them when it is finished.
5. Two-Phase Locking (2PL): a transaction may not acquire any new lock after it has released one of its locks; the protocol therefore has two phases, growing and shrinking (discussed in detail below).
6. Shared Lock: these locks are also referred to as read locks and are denoted by 'S'. If a transaction T has obtained a shared lock on data item X, then T can read X but cannot write X. Multiple shared locks can be held simultaneously on a data item.
A deadlock is an unwanted situation in which two or more transactions wait indefinitely for one another to give up locks.
Four necessary conditions for deadlock
Mutual exclusion — only one process at a time can use the resource.
Database Systems Handbook BY: MUHAMMAD SHARIF 163 Hold and wait -- there must exist a process that is holding at least one resource and is waiting to acquire additional resources that are currently being held by other processes. No preemption -- resources cannot be preempted; a resource can be released only voluntarily by the process holding it. Circular wait – one waits for others, others wait for one. The Bakery algorithm is one of the simplest known solutions to the mutual exclusion problem for the general case of the N process. The bakery Algorithm is a critical section solution for N processes. The algorithm preserves the first come first serve the property. Before entering its critical section, the process receives a number. The holder of the smallest number enters the critical section. Deadlock detection This technique allows deadlock to occur, but then, it detects it and solves it. Here, a database is periodically checked for deadlocks. If a deadlock is detected, one of the transactions, involved in the deadlock cycle, is aborted. Other transactions continue their execution. An aborted transaction is rolled back and restarted. When a transaction waits more than a specific amount of time to obtain a lock (called the deadlock timeout), Derby can detect whether the transaction is involved in a deadlock. If deadlocks occur frequently in your multi-user system with a particular application, you might need to do some debugging. A deadlock where two transactions are waiting for one another to give up locks. Deadlock detection and removal schemes Wait-for-graph This scheme allows the older transaction to wait but kills the younger one.
Phantom deadlock detection is the condition in which a deadlock does not actually exist but, due to delays in propagating local information, the deadlock detection algorithm identifies locks as already acquired and reports a deadlock anyway.
There are three alternatives for deadlock detection in a distributed system:
Centralized Deadlock Detector — one site is designated as the central deadlock detector.
Hierarchical Deadlock Detector — the deadlock detectors are arranged in a hierarchy.
Distributed Deadlock Detector — all the sites participate in detecting deadlocks and removing them.
The deadlock detection algorithm uses three data structures:
Available — a vector of length m indicating the number of available resources of each type.
Allocation — a matrix of size n*m, where A[i, j] indicates the number of instances of resource type j currently allocated to process Pi.
Request — a matrix of size n*m indicating the current request of each process, where Request[i, j] is the number of instances of resource type j requested by process Pi.
Deadlock Avoidance
Deadlock avoidance strategies include acquiring locks in a pre-defined order and acquiring all locks at once before starting the transaction. Aborting a transaction is not always a practical approach; instead, deadlock avoidance mechanisms can be used to detect a potential deadlock situation in advance. One technique requires every transaction to lock all the data items it needs in advance: if any of the items cannot be obtained, none of the items are locked, and the transaction is rescheduled for execution. This technique is used together with two-phase locking. To prevent any deadlock situation in the system, the DBMS aggressively inspects all the operations that transactions are about to execute; if it finds that a deadlock situation might occur, that transaction is never allowed to execute.
Deadlock Prevention Algorithms
1. Wait-die scheme
2. Wound-wait scheme
Note! Deadlock prevention is stricter than deadlock avoidance. The algorithms are as follows:
Wait-Die — if T1 is older than T2, T1 is allowed to wait. Otherwise, if T1 is younger than T2, T1 is aborted and later restarted. (Wait-die: the older transaction is permitted to wait for the younger.)
Wound-Wait — if T1 is older than T2, T2 is aborted and later restarted. Otherwise, if T1 is younger than T2, T1 is allowed to wait. (Wound-wait: the younger transaction is permitted to wait for the older.)
Note: In a bulky system, deadlock prevention techniques may work well. Alternatively, we may want an algorithm that avoids deadlock by making the right choice all the time.
Dijkstra's Banker's Algorithm is an approach that tries to grant processes as much as possible while guaranteeing no deadlock (a small worked example follows at the end of this section).
Safe state — a state is safe if the system can allocate resources to each process, in some order, and still avoid a deadlock; that is, if the system can allocate all resources requested by all processes (up to their stated maximums) without entering a deadlock state.
The Banker's Algorithm for a single resource type is a resource allocation and deadlock avoidance algorithm; its name comes from the analogy of a banker who must allocate limited cash so that every customer's stated maximum demand can still eventually be satisfied. As a new process P1 enters, it declares the maximum number of resources it needs. The system checks whether allocating those resources to P1 will leave the system in a safe state. If it will, the resources are allocated to P1; otherwise, P1 must wait until other processes release some resources. This is the basic idea of the Banker's Algorithm.
Resource Preemption: to eliminate deadlocks using resource preemption, we preempt some resources from processes and give those resources to other processes. This method raises three issues:
(a) Selecting a victim: we must determine which resources and which processes are to be preempted, and in which order, so as to minimize cost.
(b) Rollback: we must determine what should be done with the process from which resources are preempted. One simple idea is total rollback: abort the process and restart it.
(c) Starvation: the same process may always be picked as the victim, so that it never completes its designated task. This situation is called starvation and must be avoided; one solution is to pick a given process as a victim only a finite number of times.
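A small worked example of the Banker's safe-state check, with hypothetical numbers: suppose a single resource type has 12 instances, and three processes P0, P1, and P2 have declared maxima of 10, 4, and 9 while currently holding 5, 2, and 2 instances, leaving 3 available.
P1 still needs 4 − 2 = 2 ≤ 3, so P1 can finish and release its 4 instances, making 5 available.
P0 still needs 10 − 5 = 5 ≤ 5, so P0 can finish, making 10 available.
P2 still needs 9 − 2 = 7 ≤ 10, so P2 can finish.
The sequence <P1, P0, P2> therefore exists, and the state is safe. If, however, P2 were granted one more instance (holding 3, with 2 available), then after P1 finished only 4 instances would be available, while P0 would still need 5 and P2 would need 6: no completion sequence exists, the state is unsafe, and the Banker's algorithm would make P2 wait rather than grant that request.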
Database Systems Handbook BY: MUHAMMAD SHARIF 166 Concurrent vs non-concurrent data access Concurrent executions are done for Better transaction throughput, response time Done via better utilization of resources What is Concurrency Control? Concurrent access is quite easy if all users are just reading data. There is no way they can interfere with one another. Though for any practical Database, it would have a mix of READ and WRITE operations, and hence the concurrency is a challenge. DBMS Concurrency Control is used to address such conflicts, which mostly occur with a multi-user system.
The main concurrency control techniques/methods are:
1. Two-phase locking protocol
2. Timestamp ordering protocol
3. Multi-version concurrency control
4. Validation concurrency control
The Two-Phase Locking Protocol, also known as the 2PL protocol, is a method of concurrency control in a DBMS that ensures serializability by applying locks to transaction data, blocking other transactions from accessing the same data simultaneously. The two-phase locking protocol helps eliminate concurrency problems in a DBMS, and every 2PL schedule is serializable.
Theorem: 2PL enforces conflict-serializable schedules; it does not, however, enforce recoverable schedules.
2PL rule: once a transaction has released a lock, it is not allowed to obtain any other locks.
This locking protocol divides the execution phase of a transaction into three parts. In the first phase, when the transaction begins to execute, it requests permission for the locks it needs. In the second part, the transaction obtains all the locks. When the transaction releases its first lock, the third phase starts: in this phase, the transaction cannot demand any new locks; instead, it only releases the acquired locks.
Database Systems Handbook BY: MUHAMMAD SHARIF 168 The Two-Phase Locking protocol allows each transaction to make a lock or unlock request Growing Phase and Shrinking Phase. 2PL has the following two phases: A growing phase, in which a transaction acquires all the required locks without unlocking any data. Once all locks have been acquired, the transaction is in its locked point. A shrinking phase, in which a transaction releases all locks and cannot obtain any new lock. In practice: – Growing phase is the entire transaction – Shrinking phase is during the commit
The 2PL protocol indeed offers serializability; however, it does not ensure that deadlocks cannot happen. In distributed settings, local and global deadlock detectors search for deadlocks and resolve them by rolling transactions back to their initial states.
Strict Two-Phase Locking Method
Strict two-phase locking is almost like 2PL. The only difference is that Strict-2PL never releases a lock after using it: it holds all the locks until the commit point and releases them all in one go when the process is over. In Strict 2PL, all locks held by a transaction are released when the transaction completes. Strict 2PL guarantees conflict serializability and, in addition, strict (cascadeless) schedules.
Centralized 2PL
In centralized 2PL, a single site is responsible for the lock management process: there is only one lock manager for the entire DBMS.
Primary Copy 2PL
In the primary copy 2PL mechanism, multiple lock managers are distributed to different sites, and a particular lock manager is responsible for managing the locks for a given set of data items. When the primary copy has been updated, the change is propagated to the slaves.
Distributed 2PL
In this kind of two-phase locking mechanism, lock managers are distributed across all sites, each responsible for managing the locks for the data at its site. If no data is replicated, it is equivalent to primary copy 2PL. The communication costs of distributed 2PL are considerably higher than those of primary copy 2PL.
Timestamp Methods for Concurrency Control
A timestamp is a unique identifier created by the DBMS to identify the relative starting time of a transaction. Typically, timestamp values are assigned in the order in which the transactions are submitted to the system, so a timestamp can be thought of as the transaction start time. Timestamping is therefore a method of concurrency control in which each transaction is assigned a transaction timestamp. Timestamps must have two properties:
Uniqueness: no two timestamp values can be equal.
Monotonicity: timestamp values always increase.
Further timestamp topics include granule timestamps, timestamp ordering, and conflict resolution in timestamps.
The timestamp-based protocol in a DBMS is an algorithm that uses the system time or a logical counter as a timestamp to serialize the execution of concurrent transactions; it ensures that every conflicting read and write operation is executed in timestamp order.
Conflict Resolution in Timestamps
To deal with conflicts in timestamp algorithms, some transactions involved in conflicts are made to wait while others are aborted. The main strategies of conflict resolution in timestamps are:
Wait-die: the older transaction waits for the younger if the younger accessed the granule first; the younger transaction is aborted (dies) and restarted if it tries to access a granule after an older concurrent transaction.
Wound-wait: the older transaction preempts the younger by aborting (wounding) it if the younger transaction tries to access a granule after an older concurrent transaction.
Database Systems Handbook BY: MUHAMMAD SHARIF 170 An older transaction will wait for a younger one to commit if the younger has accessed a granule that both want. Timestamp Ordering: Following are the three basic variants of timestamp-based methods of concurrency control: 1. Total timestamp ordering 2. Partial timestamp ordering Multiversion timestamp ordering Multi-version concurrency control Multiversion Concurrency Control (MVCC) enables snapshot isolation. Snapshot isolation means that whenever a transaction would take a read lock on a page, it makes a copy of the page instead, and then performs its operations on that copied page. This frees other writers from blocking due to read lock held by other transactions. Maintain multiple versions of objects, each with its timestamp. Allocate the correct version to reads. Multiversion schemes keep old versions of data items to increase concurrency. The main difference between MVCC and standard locking: read locks do not conflict with write locks ⇒ reading never blocks writing, writing blocks reading Advantage of MVCC locking needed for serializability considerably reduced Disadvantages of MVCC visibility-check overhead (on every tuple read/write) Validation-Based Protocols Validation-based Protocol in DBMS also known as Optimistic Concurrency Control Technique is a method to avoid concurrency in transactions. In this protocol, the local copies of the transaction data are updated rather than the data itself, which results in less interference while the execution of the transaction. Optimistic Methods of Concurrency Control: The optimistic method of concurrency control is based on the assumption that conflicts in database operations are rare and that it is better to let transactions run to completion and only check for conflicts before they commit. The Validation based Protocol is performed in the following three phases: Read Phase Validation Phase Write Phase Read Phase In the Read Phase, the data values from the database can be read by a transaction but the write operation or updates are only applied to the local data copies, not the actual database. Validation Phase In the Validation Phase, the data is checked to ensure that there is no violation of serializability while applying the transaction updates to the database. Write Phase In the Write Phase, the updates are applied to the database if the validation is successful, else; the updates are not applied, and the transaction is rolled back. Laws of concurrency control 1. First Law of Concurrency Control Concurrent execution should not cause application programs to malfunction. 2. Second Law of Concurrency Control Concurrent execution should not have lower throughput or much higher response times than serial execution. Lock Thrashing is the point where system performance(throughput) decreases with increasing load (adding more active transactions). It happens due to the contention of locks. Transactions waste time on lock waits.
Database Systems Handbook BY: MUHAMMAD SHARIF 171 The default concurrency control mechanism depends on the table type Disk-based tables (D-tables) are by default optimistic. Main-memory tables (M-tables) are always pessimistic. Pessimistic locking (Locking and timestamp) is useful if there are a lot of updates and relatively high chances of users trying to update data at the same time. Optimistic (Validation) locking is useful if the possibility for conflicts is very low – there are many records but relatively few users, or very few updates and mostly read-type operations. Optimistic concurrency control is based on the idea of conflicts and transaction restart while pessimistic concurrency control uses locking as the basic serialization mechanism (it assumes that two or more users will want to update the same record at the same time, and then prevents that possibility by locking the record, no matter how unlikely conflicts are. Properties Optimistic locking is useful in stateless environments (such as mod_plsql and the like). Not only useful but critical. optimistic locking -- you read data out and only update it if it did not change. Optimistic locking only works when developers modify the same object. The problem occurs when multiple developers are modifying different objects on the same page at the same time. Modifying one object may affect the process of the entire page, which other developers may not be aware of. pessimistic locking -- you lock the data as you read it out AND THEN modify it. Lock Granularity: A database is represented as a collection of named data items. The size of the data item chosen as the unit of protection by a concurrency control program is called granularity. Locking can take place at the following level : Database level. Table level(Coarse-grain locking). Page level. Row (Tuple) level. Attributes (fields) level. Multiple Granularity Let's start by understanding the meaning of granularity. Granularity: It is the size of the data item allowed to lock. It can be defined as hierarchically breaking up the database into blocks that can be locked. The Multiple Granularity protocol enhances concurrency and reduces lock overhead. It maintains the track of what to lock and how to lock. It makes it easy to decide either to lock a data item or to unlock a data item. This type of hierarchy can be graphically represented as a tree. There are three additional lock modes with multiple granularities: Intention-shared (IS): It contains explicit locking at a lower level of the tree but only with shared locks. Intention-Exclusive (IX): It contains explicit locking at a lower level with exclusive or shared locks. Shared & Intention-Exclusive (SIX): In this lock, the node is locked in shared mode, and some node is locked in exclusive mode by the same transaction. Compatibility Matrix with Intention Lock Modes: The below table describes the compatibility matrix for these lock modes:
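(Read it as: a lock in the row mode can be granted on a node that is already locked in the column mode only where the entry is YES. This is the standard matrix for multiple-granularity locking, with X denoting the exclusive mode.)

      IS    IX    S     SIX   X
IS    YES   YES   YES   YES   NO
IX    YES   YES   NO    NO    NO
S     YES   NO    YES   NO    NO
SIX   YES   NO    NO    NO    NO
X     NO    NO    NO    NO    NO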
The phantom problem
A database is not simply a static collection of tuples: tuples are inserted and deleted, and this is where the phantom problem appears. A "phantom" is a tuple that is invisible during part of a transaction's execution but visible during another part of that same execution. Even if transactions lock the individual data items they touch, phantoms can result in non-serializable execution. A minimal SQL sketch of the scenario follows.
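Assuming a hypothetical PRODUCT(id, name, color) table:

-- T1:
SELECT COUNT(*) FROM product WHERE color = 'blue';
-- T2, concurrently:
INSERT INTO product (id, name, color) VALUES (101, 'Lamp', 'blue');
COMMIT;
-- T1 again: the same query now returns a different count,
-- even though T1 never released any lock on the rows it had read.
SELECT COUNT(*) FROM product WHERE color = 'blue';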
Database Systems Handbook BY: MUHAMMAD SHARIF 174 In our example: – T1: reads the list of products – T2: inserts a new product – T1: re-reads: a new product appears! Dealing With Phantoms Lock the entire table, or Lock the index entry for ‘blue’ – If the index is available Or use predicate locks – A lock on an arbitrary predicate Dealing with phantoms is expensive END
CHAPTER 9 RELATIONAL ALGEBRA AND QUERY PROCESSING
Relational algebra is a procedural query language: it gives a step-by-step process to obtain the result of a query, using operators to perform queries.
What is an "algebra"?
Answer: a set of operands and operations that are "closed" under all compositions.
What is the basis of query languages?
Answer: two formal query languages form the basis of "real" query languages (e.g., SQL):
1) Relational Algebra: operational; it provides a recipe for evaluating the query and is useful for representing execution plans. It is a language based on operators and a domain of values; the operators map values taken from the domain into other domain values. Here the domain is the set of relations/tables.
2) Relational Calculus: lets users describe what they want rather than how to compute it (non-operational, non-procedural, declarative).
SQL is an abstraction of relational algebra, which makes using it much easier than writing the underlying math. The parts of SQL that relate most directly to relational algebra are:
SQL -> Relational Algebra
SELECT column list -> Projection
WHERE clause -> Selection
UNION -> Set union
MINUS/EXCEPT -> Set difference
JOIN -> Cartesian product plus selection (a join whose condition is missing or always true degenerates into a plain Cartesian product)
Detailed explanations of the relational operators follow:
Operation (Symbol) and Purpose
Select (σ): the SELECT operation selects a subset of the tuples according to a given selection condition (a unary operator).
Projection (π): the projection eliminates all attributes of the input relation except those mentioned in the projection list (a unary operator). The projection operator has to eliminate duplicates!
Union (∪): includes all tuples that are in table A or in table B.
Set Difference (−): the result of A − B is a relation that includes all tuples that are in A but not in B.
Intersection (∩): defines a relation consisting of the set of all tuples that are in both A and B.
Cartesian Product (×): the Cartesian product operation merges columns from two relations.
Inner Join: includes only those tuples that satisfy the matching criteria.
Theta Join (θ): the general case of the JOIN operation is called a theta join, denoted by the symbol θ.
Equi Join: when a theta join uses only an equality condition, it becomes an equi join.
Natural Join (⋈): can only be performed if there is a common attribute (column) between the relations.
Outer Join: returns, along with the tuples that satisfy the matching criteria, some or all unmatched tuples as well.
Left Outer Join (⟕): keeps all tuples of the left relation.
Right Outer Join (⟖): keeps all tuples of the right relation.
Full Outer Join (⟗): all tuples from both relations are included in the result, irrespective of the matching condition.
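As a quick sketch of how these operators map to SQL (assuming a hypothetical EMP(emp_id, ename, dept, salary) relation):

-- Selection: σ salary > 50000 (EMP)
SELECT * FROM emp WHERE salary > 50000;

-- Projection: π ename, dept (EMP)
SELECT DISTINCT ename, dept FROM emp;  -- DISTINCT because projection eliminates duplicates

-- Union: EMP1 ∪ EMP2 (the two relations must be union-compatible)
SELECT ename FROM emp1 UNION SELECT ename FROM emp2;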
Select Operation
Notation: σp(r), where p is called the selection predicate.
Project Operation
Notation: πA1, ..., Ak(r)
The result is defined as the relation of k columns obtained by deleting the columns that are not listed.
Condition join / theta join
Union Operation
Notation: r ∪ s
What is the composition of operators/operations?
In general, since the result of a relational-algebra operation is of the same type (a relation) as its inputs, relational-algebra operations can be composed together into a relational-algebra expression. Composing relational-algebra operations into relational-algebra expressions is just like composing arithmetic operations (such as −, ∗, and ÷) into arithmetic expressions.
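In SQL, composition shows up as query nesting. A minimal sketch, reusing the hypothetical relation r(A, B) from the notation examples nearby:

-- π_A(σ_{B=17}(r)): the inner query performs the selection,
-- the outer query performs the projection
SELECT A
FROM (SELECT * FROM r WHERE B = 17);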
Examples of Relational Algebra
Relational Calculus
There is an alternate way of formulating queries known as relational calculus. Relational calculus is a non-procedural query language: the user is not concerned with the details of how to obtain the results. Relational calculus states what to do but never explains how to do it. Most commercial relational languages are based on aspects of relational calculus, including SQL, QBE, and QUEL.
It is based on predicate calculus, a name derived from a branch of symbolic logic. A predicate is a truth-valued function with arguments.
Notations of RC
Types of relational calculus:
TRC: variables range over (i.e., get bound to) tuples.
DRC: variables range over domain elements (= field values).
Tuple Relational Calculus (TRC)
TRC (tuple relational calculus) can be quantified: in TRC we can use the existential (∃) and universal (∀) quantifiers.
Domain Relational Calculus (DRC)
Domain relational calculus uses the same operators as tuple calculus. It uses the logical connectives ∧ (and), ∨ (or), and ¬ (not), and the existential (∃) and universal (∀) quantifiers to bind variables. QBE (Query By Example) is a query language related to domain relational calculus.
Differences between RA and RC
1. Language type: relational algebra is a procedural query language; relational calculus is a non-procedural (declarative) query language.
2. Objective: relational algebra targets how to obtain the result; relational calculus targets what result to obtain.
3. Order: relational algebra specifies the order in which operations are to be performed; relational calculus specifies no such order of execution for its operations.
4. Dependency: relational algebra is domain-independent; relational calculus can be domain-dependent.
5. Programming language: relational algebra is close to programming-language concepts; relational calculus is not related to programming-language concepts.
Differences between TRC and DRC
Variables: in TRC, variables represent tuples from specified relations; in DRC, variables represent values drawn from specified domains.
Unit: a tuple is a single element of a relation, in database terms a row; a domain is equivalent to a column's data type plus any constraints on its values.
Filtering: TRC filters using tuples of a relation; DRC filters based on the domains of attributes.
Membership: a TRC query cannot be expressed using a membership condition; a DRC query can.
Related language: QUEL (Query Language) is related to TRC; QBE (Query-By-Example) is related to DRC.
Flavour: TRC reflects traditional pre-relational file structures; DRC is more similar to logic as a modeling language.
Notation: TRC: {T | P(T)} or {T | Condition(T)}; DRC: {<a1, a2, a3, ..., an> | P(a1, a2, a3, ..., an)}.
Example: TRC: {T | EMPLOYEE(T) AND T.DEPT_ID = 10}; DRC: {<a1, a2, ...> | <a1, a2, ...> ∈ EMPLOYEE ∧ DEPT_ID = 10}.
Examples of RC appear in the comparison section below.
Query Block in RA
Query tree plan
SQL, Relational Algebra, Tuple Calculus, and Domain Calculus examples: Comparisons
Select Operation
R = (A, B)
Relational Algebra: σB=17(r)
Tuple Calculus: {t | t ∈ r ∧ t[B] = 17}
Domain Calculus: {<a, b> | <a, b> ∈ r ∧ b = 17}
Project Operation
R = (A, B)
Relational Algebra: πA(r)
Tuple Calculus: {t | ∃ p ∈ r (t[A] = p[A])}
Domain Calculus: {<a> | ∃ b (<a, b> ∈ r)}
Combining Operations
R = (A, B)
Relational Algebra: πA(σB=17(r))
Tuple Calculus: {t | ∃ p ∈ r (t[A] = p[A] ∧ p[B] = 17)}
Domain Calculus: {<a> | ∃ b (<a, b> ∈ r ∧ b = 17)}
Natural Join
R = (A, B, C, D), S = (B, D, E)
Relational Algebra: r ⋈ s = πr.A, r.B, r.C, r.D, s.E(σr.B=s.B ∧ r.D=s.D(r × s))
Tuple Calculus: {t | ∃ p ∈ r ∃ q ∈ s (t[A] = p[A] ∧ t[B] = p[B] ∧ t[C] = p[C] ∧ t[D] = p[D] ∧ t[E] = q[E] ∧ p[B] = q[B] ∧ p[D] = q[D])}
Domain Calculus: {<a, b, c, d, e> | <a, b, c, d> ∈ r ∧ <b, d, e> ∈ s}
Query Processing in DBMS
Query processing is the activity of extracting data from the database. Query processing takes several steps to fetch the data:
Parsing and translation
Optimization
Evaluation
Query processing works in the following way:
Parsing and Translation
Query processing begins with a user query written in a high-level language such as SQL, for example:
select emp_name from Employee where salary > 10000;
To make the system understand the user query, it is translated into relational algebra. This query can be brought into relational-algebra form as:
πemp_name(σsalary>10000(Employee))
After translating the given query, each relational-algebra operation can be executed using one of several different algorithms, and the optimizer may choose among equivalent expressions. This is how query processing begins.
Query processor
The query processor assists in the execution of database queries such as retrieval, insertion, update, or removal of data.
Key components:
Data Manipulation Language (DML) compiler
Query parser
Query rewriter
Query optimizer
Query executor
Query Processing Workflow
From the moment a query is written and submitted by the user to the point of its execution and the eventual return of the results, several steps are involved. These steps are outlined in the following diagram.
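In Oracle, the plan the optimizer chooses for this running example can be inspected with EXPLAIN PLAN and the DBMS_XPLAN package; a quick sketch, assuming the Employee table above exists:

EXPLAIN PLAN FOR
  SELECT emp_name FROM Employee WHERE salary > 10000;

-- Read the plan back from the default PLAN_TABLE
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);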
What Does Parsing a Query Mean?
The parsing of a query is performed within the database using the Optimizer component. Taking all of its inputs into consideration, the Optimizer decides the best possible way to execute the query. This information is stored within the SGA in the Library Cache, a sub-pool within the Shared Pool. The memory area within the Library Cache in which the information about a query's processing is kept is called the cursor. Thus, if a reusable cursor is found within the Library Cache, it is just a matter of picking it up and using it to execute the statement. This is called soft parsing. If it is not possible to find a reusable cursor, or if the query has never been executed before, query optimization is required. This is called hard parsing.
Network model with query processing
Understanding Hard Parsing
Hard parsing means that either the cursor was not found in the Library Cache, or it was found but was invalidated for some reason. Either way, hard parsing means the optimizer must do work to produce the most optimal execution plan for the query. Before the search for the best plan starts, some tasks are completed. These tasks are executed every time, even if the same query runs in the same session N times:
1. Syntax check
2. Semantics check
3. Hashing the query text and generating a hash key-value pair
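The soft-parse/hard-parse behaviour can be watched from the shared pool. A small sketch using the V$SQL view: PARSE_CALLS counts all parse calls against a cursor, while LOADS approximates how often the cursor had to be hard parsed or reloaded. The LIKE pattern assumes the running example query above:

SELECT sql_id, parse_calls, loads, executions
FROM   v$sql
WHERE  sql_text LIKE 'select emp_name%';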
Various phases of query execution in the system. The query first travels from the client process to the server process and into the SQL work area of the PGA; then the following phases start:
1. Parsing (build the parse tree; syntax check, semantic check, shared pool check; the shared pool check is what enables a soft parse)
2. Transformation (binding)
3. Estimation / query optimization
4. Plan generation, row source generation
5. Query execution and plan
6. Query result
Index and table scans in the query execution process
Query Evaluation
Query Evaluation Techniques for Large Databases
The logic applied to the evaluation of SELECT statements, as described here, does not precisely reflect how the DBMS server evaluates your query to determine the most efficient way to return results. However, by applying this logic to your queries and data, the results of your queries can be anticipated.
1. Evaluate the FROM clause. Combine all the sources specified in the FROM clause to create a Cartesian product (a table composed of all the rows and columns of the sources). If joins are specified, evaluate each join to obtain its results table, and combine it with the other sources in the FROM clause.
2. Apply the WHERE clause. Discard rows in the result table that do not fulfill the restrictions specified in the WHERE clause.
3. Apply the GROUP BY clause. Group results according to the columns specified in the GROUP BY clause.
4. Apply the HAVING clause. Discard rows in the result table that do not fulfill the restrictions specified in the HAVING clause.
5. Evaluate the SELECT clause. Discard columns that are not specified in the SELECT clause. If SELECT DISTINCT is specified, discard duplicate rows.
6. Perform any unions. Combine result tables as specified in the UNION clause. (In the case of SELECT FIRST n ... UNION SELECT ..., the first n rows of the result of the union are chosen.)
7. Apply the ORDER BY clause. Sort the result rows as specified.
Steps to process a query: parsing, validation, resolution, optimization, plan compilation, execution.
The architecture of query engines: query-processing algorithms iterate over the members of their input sets; the algorithms are algebra operators. The physical algebra is the set of operators, data representations, and associated cost functions that the database execution engine supports, while the logical algebra is more closely related to the data model and the queries expressible in it (e.g., SQL). Synchronization and data transfer between operators are the key issues. Naïve query-plan execution methods include creating temporary files/buffers, using one process per operator, and using IPC. The practical method is to implement all operators as a set of procedures (open, next, and close) and to have operators schedule each other within a single process via simple function calls. Each time an operator needs another piece of data ("granule"), it calls its data input operator's next function to produce one. Operators structured in this manner are called iterators.
Note: (figure) three SQL relational-algebra query plans: unpushed, pushed, and nearly fully pushed.
Query plans are algebra expressions and can be represented as trees. Left-deep (every right subtree is a leaf), right-deep (every left subtree is a leaf), and bushy (arbitrary) are the three common structures. In a left-deep tree, each operator draws input from one input and iterates over the other input with an inner loop.
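The open/next/close protocol can be illustrated from SQL itself. As a loose analogy only (not the engine's internal API), an Oracle pipelined table function hands one row at a time to its consumer on demand, much like an iterator's next call; the type and function names here are hypothetical:

CREATE OR REPLACE TYPE num_tab AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION gen_rows(n IN NUMBER)
  RETURN num_tab PIPELINED
IS
BEGIN
  FOR i IN 1 .. n LOOP
    PIPE ROW (i);   -- produce one "granule" on demand, like next()
  END LOOP;
  RETURN;           -- signals end of input, analogous to close()
END;
/
-- The consumer pulls rows lazily; nothing is materialized up front.
SELECT COUNT(*) FROM TABLE(gen_rows(1000));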
Cost Estimation
The cost of a query evaluation plan is estimated in terms of various resources, including:
Number of disk accesses
CPU time required to execute the query
Query Optimization
Summary of the steps of processing an SQL query: lexical analysis, parsing, validation, query optimizer, query code generator, runtime database processor.
The term optimization here means "choose a reasonably efficient strategy" (not necessarily the best strategy). Query optimization: choosing a suitable strategy to execute a particular query more efficiently.
An SQL query undergoes several stages: lexical analysis (scanning, LEX), parsing (YACC), and validation.
Scanning: identify the SQL tokens.
Parser: check the query syntax according to the SQL grammar.
Validation: check that all attribute/relation names are valid in the particular database being queried.
Then create the query tree or the query graph (these are internal representations of the query).
Main techniques to implement query optimization:
– Heuristic rules (to order the execution of operations in a query)
– Computing cost estimates of different execution strategies
Process for heuristic optimization:
1. The parser of a high-level query generates an initial internal representation.
2. Apply heuristic rules to optimize the internal representation.
3. A query execution plan is generated to execute groups of operations, based on the access paths available for the files involved in the query.
Query optimization example: basic algorithms for executing query operations
Sorting
External sorting is a basic ingredient of relational operators that use sort-merge strategies. Sorting is used implicitly in SQL in many situations: the ORDER BY clause, joins, UNION, intersection, and duplicate elimination (DISTINCT).
Sorting can be avoided if we have an index (ordered access to the data).
External sorting: sorting large files of records that do not fit entirely in main memory.
Internal sorting: sorting files that fit entirely in main memory.
All sorting in "real" database systems uses merging techniques, since very large data sets are expected. Sorting modules' interfaces should follow the structure of iterators.
Exploit the duality of quicksort and mergesort: a sort proceeds in a divide phase and a combine phase. One of the two phases is based on logical keys (indexes), while the other physically arranges the data items (which phase is the logical one is particular to each algorithm). There are two sub-algorithms: one for sorting a run within main memory, another for managing runs on disk or tape. The degree of fan-in (the number of runs merged in a given step) is a key parameter.
External sorting: the first step in bulk loading a B+ tree index is to sort the data entries and records. Sorting is also useful for eliminating duplicate copies in a collection of records (why?), and the sort-merge join algorithm involves sorting.
Hashing
Hashing should be considered for equality matches, in general. Hashing-based query-processing algorithms use an in-memory hash table of database objects; if the data in the hash table is bigger than main memory (the common case), hash table overflow occurs. Three techniques for overflow handling exist:
Avoidance: the input set is partitioned into F files before any in-memory hash table is built. Partitions can be dealt with independently. Partition sizes must be chosen well, or recursive partitioning will be needed. (See the partitioning sketch below the figure.)
Resolution: assume overflow won't occur; if it does, partition dynamically.
Hybrid: like resolution, but when partitioning, write only one partition to disk and keep the rest in memory.
Database tuning
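Returning to hash-based processing: the partitioning idea behind the avoidance strategy can be sketched directly in Oracle SQL. ORA_HASH(expr, max_bucket) assigns each row a bucket in the range 0..max_bucket, and each bucket could then be processed independently; the sales table and its columns are hypothetical:

SELECT ora_hash(customer_id, 7) AS bucket,   -- split input into 8 partitions
       COUNT(*)                 AS rows_in_bucket
FROM   sales
GROUP  BY ora_hash(customer_id, 7)
ORDER  BY bucket;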
END
CHAPTER 10 FILE STRUCTURES, INDEXING, AND HASHING
Overview: Related data and information are stored collectively in file formats. A file is a sequence of records stored in binary format.
File Organization
File organization defines how file records are mapped onto disk blocks. We have four types of file organization for organizing file records:
Sorted files: best if records must be retrieved in some order, or if only a 'range' of records is needed.
Sequential File Organization
Records are stored in sequential order based on the value of the search key of each record. A file whose records are organized by an index or key is called a sequential file; it is much faster to find records based on that key.
Hashing File Organization
A hash function is computed on some attribute of each record; the result specifies in which block of the file the record is placed. Organizing records via a hash function on some key is called hashing file organization.
Heap File Organization
A record can be placed anywhere in the file where there is space; there is no ordering in the file. Records are placed randomly; this is called heap file organization. Every record can be placed anywhere in the table file, wherever there is space for the record. Virtually all databases provide heap file organization. In a heap-organized table, finding all rows where the value of account_id is A-591 requires searching through the entire table file; this is called a file scan.
Note: Generally, each relation is stored in a separate file.
These organizations surface in Oracle DDL, as sketched below.
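A minimal sketch of the file organizations above in Oracle DDL; the table and cluster names are hypothetical:

-- Heap organization (Oracle's default): rows go wherever space is free.
CREATE TABLE account_heap (
  account_id NUMBER PRIMARY KEY,
  balance    NUMBER
);

-- Index-organized table: rows stored in primary-key order inside a B-tree,
-- roughly corresponding to sequential/sorted file organization.
CREATE TABLE account_iot (
  account_id NUMBER PRIMARY KEY,
  balance    NUMBER
) ORGANIZATION INDEX;

-- Hash cluster: a hash function on account_id determines the block,
-- corresponding to hashing file organization.
CREATE CLUSTER account_cluster (account_id NUMBER)
  SIZE 512 HASHKEYS 1000;
CREATE TABLE account_hashed (
  account_id NUMBER PRIMARY KEY,
  balance    NUMBER
) CLUSTER account_cluster (account_id);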
Clustered File Organization
Clustered file organization is not considered good for large databases. In this mechanism, related records from one or more relations are kept in the same disk block; the ordering of records is not based on the primary key or search key.
File Operations
Operations on database files can be broadly classified into two categories:
1. Update operations
2. Retrieval operations
Update operations change the data values by insertion, deletion, or update. Retrieval operations do not alter the data but retrieve it, after optional conditional filtering. In both types of operations, selection plays a significant role. Other than the creation and deletion of a file, several operations can be performed on files:
Open − A file can be opened in one of two modes, read mode or write mode. In read mode, the operating system does not allow anyone to alter data; data is read-only. Files opened in read mode can be shared among several entities. Write mode allows data modification. Files opened in write mode can be read but cannot be shared.
Locate − Every file has a file pointer, which tells the current position where the data is to be read or written. This pointer can be adjusted accordingly. Using the find (seek) operation, it can be moved forward or backward.
Read − By default, when a file is opened in read mode, the file pointer points to the beginning of the file. There are options where the user can tell the operating system where to place the file pointer at the time of opening the file. The data immediately following the file pointer is read.
Write − The user can open a file in write mode, which enables them to edit its contents: deletion, insertion, or modification. The file pointer can be located at the time of opening or can be changed dynamically if the operating system allows it.
Close − This is the most important operation from the operating system's point of view. When a request to close a file is generated, the operating system removes all the locks (if in shared mode).
Tree-Structured Indexing
Indexing
Indexing is a data-structure technique to efficiently retrieve records from database files based on the attributes on which the indexing has been done. Indexing in database systems is similar to what we see in books. Indexing is defined based on its indexing attributes. Indexing can be of the following types:
1. Primary index − A primary index is defined on an ordered data file. The data file is ordered on a key field, generally the primary key of the relation.
2. Secondary index − A secondary index may be generated from a field that is a candidate key and has a unique value in every record, or from a non-key field with duplicate values.
3. Clustering index − A clustering index is defined on an ordered data file where the data file is ordered on a non-key field. In a clustering index, the search-key order corresponds to the sequential order of the records in the data file. If the search key is a candidate key (and therefore unique), it is also called a primary index.
4. Non-clustering index − Non-clustering indexes are used to quickly find all records whose values in a certain field satisfy some condition; the search key specifies an order different from the sequential order of the file (the order of the data differs from the order of the index). Non-clustering indexes are also called secondary indexes.
Depending on what we put into the index, we have a:
Sparse index (an index entry for some tuples only)
Dense index (an index entry for each tuple)
A clustering index is usually sparse (clustering indexes can be dense or sparse).
A non-clustering index must be dense.
Ordered indexing is of two types:
1. Dense index
2. Sparse index
Dense Index
In a dense index, there is an index record for every search-key value in the database. This makes searching faster but requires more space to store the index records themselves. Index records contain a search-key value and a pointer to the actual record on disk.
Sparse Index
In a sparse index, index records are not created for every search key. An index record here contains a search key and an actual pointer to the data on disk. To search a record, we first proceed via the index record and reach the actual location of the data. If the data we are looking for is not where we directly arrive by following the index, the system starts a sequential search until the desired data is found.
Multilevel Index
Index records comprise search-key values and data pointers. The multilevel index is stored on disk along with the actual database files. As the size of the database grows, so does the size of the indices. There is an immense need to keep the index records in main memory to speed up search operations. If a single-level index is used, a large index cannot be kept in memory, which leads to multiple disk accesses.
A multilevel index helps break the index down into several smaller indices, making the outermost level so small that it can be saved in a single disk block, which can easily be accommodated anywhere in main memory.
B+ Tree
A B+ tree is a balanced search tree that follows a multilevel index format. The leaf nodes of a B+ tree hold the actual data pointers. The B+ tree ensures that all leaf nodes remain at the same height, and thus the tree is balanced. Additionally, the leaf nodes are linked in a list; therefore, a B+ tree can support random access as well as sequential access.
Structure of a B+ Tree
Every leaf node is at an equal distance from the root node. A B+ tree is of order n, where n is fixed for every B+ tree.
Internal nodes − Internal (non-leaf) nodes contain at least ⌈n/2⌉ pointers, except the root node. At most, an internal node can contain n pointers.
Leaf nodes − Leaf nodes contain at least ⌈n/2⌉ record pointers and ⌈n/2⌉ key values. At most, a leaf node can contain n record pointers and n key values. Every leaf node also contains one block pointer P to point to the next leaf node, forming a linked list.
Hash Organization
Hashing uses hash functions with search keys as parameters to generate the address of a data record.
Bucket − A hash file stores data in bucket format. The bucket is considered a unit of storage; it typically stores one complete disk block, which in turn can store one or more records.
Hash function − A hash function h is a mapping function that maps the set of all search keys K to the addresses where the actual records are placed; it is a function from search keys to bucket addresses.
Types of Hashing Techniques
There are two main types of hashing methods/techniques:
1. Static hashing
2. Dynamic hashing / extendible hashing
Static Hashing
In static hashing, when a search-key value is provided, the hash function always computes the same address. Static hashing is further divided into:
1. Open hashing
2. Closed hashing
Dynamic Hashing or Extendible Hashing
Dynamic hashing offers a mechanism in which data buckets are added and removed dynamically and on demand; in this scheme the hash function can produce a large range of values. The problem with static hashing is that it does not expand or shrink dynamically as the size of the database grows or shrinks; dynamic hashing, also known as extendible hashing, addresses exactly this.
Key terms when dealing with hashing of records:
Bucket overflow − The condition of bucket overflow is known as a collision. This is a fatal state for any static hash function. In this case, overflow chaining can be used.
Overflow chaining − When buckets are full, a new bucket is allocated for the same hash result and is linked after the previous one. This mechanism is called closed hashing.
Linear probing − When a hash function generates an address at which data is already stored, the next free bucket is allocated to it. This mechanism is called open hashing.
Data bucket – Data buckets are memory locations where the records are stored; a bucket is also known as a unit of storage.
Key: A DBMS key is an attribute or set of attributes that helps you identify a row (tuple) in a relation (table). This allows you to find the relationship between two tables.
Hash function: a mapping function that maps the set of all search keys to the addresses where the actual records are placed.
Linear probing – Linear probing uses a fixed interval between probes. In this method, the next available data block is used to enter the new record, instead of overwriting the older record.
Quadratic probing – Helps determine the new bucket address by adding an interval between probes: the consecutive outputs of a quadratic polynomial are added to the starting value given by the original computation.
Hash index – The address of the data block. A hash function can be anything from a simple mathematical function to a complex one.
Double hashing – Double hashing is a method used in hash tables to resolve collisions by applying a second hash function.
Bucket overflow: the condition of bucket overflow is called a collision; this is a fatal state for any static hash function.
Hashing function h(r): a mapping from the index's search key to the bucket in which (the data entry for) record r belongs.
What is a collision? A hash collision is the state in which the resulting hashes of two or more data items in the data set wrongly map to the same place in the hash table.
How to deal with a hashing collision? There are two techniques you can use to resolve a hash collision:
1. Rehashing: this method invokes a secondary hash function, applied repeatedly until an empty slot is found, where the record is placed.
2. Chaining: the chaining method builds a linked list of items whose keys hash to the same value. This method requires an extra link field in each table position.
An index is an on-disk structure associated with a table or view that speeds the retrieval of rows from the table or view. An index contains keys built from one or more columns in the table or view. Indexes are automatically created when PRIMARY KEY and UNIQUE constraints are defined on table columns. An index on a file speeds up selections on the search-key fields of the index. The index is a collection of buckets. Bucket = primary page plus zero or more overflow pages. Buckets contain data entries.
Types of Indexes
1. Clustered index
2. Non-clustered index
3. Columnstore index
4. Filtered index
5. Hash-based index
6. Dense primary index
7. Sparse index
8. B-tree or B+ tree index
9. FK index
10. Secondary index
11. File indexing – B+ tree
12. Bitmap indexing
13. Inverted index
14. Forward index
15. Function-based index
16. Spatial index
17. Bitmap join index
18. Composite index
19. Primary key index: if the search key contains a primary key, it is called a primary index.
20. Unique index: the search key contains a candidate key.
21. Multilevel index: a multilevel index treats the index file, which we now refer to as the first (or base) level of the multilevel index, as an ordered file with a distinct value for each K(i).
22. Inner index: the main index file for the data.
23. Outer index: a sparse index on the index.
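A few of these index types as they appear in Oracle DDL; a sketch on a hypothetical employees table:

-- B-tree (default) secondary index
CREATE INDEX emp_dept_idx ON employees (department_id);

-- Unique index (search key is a candidate key)
CREATE UNIQUE INDEX emp_email_uq ON employees (email);

-- Bitmap index (suited to low-cardinality columns, e.g. in warehouses)
CREATE BITMAP INDEX emp_gender_bix ON employees (gender);

-- Function-based index
CREATE INDEX emp_upper_name_idx ON employees (UPPER(last_name));

-- Composite index over two columns
CREATE INDEX emp_dept_job_idx ON employees (department_id, job_id);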
END
CHAPTER 11 DATABASE USERS AND DATABASE SECURITY MANAGEMENT
Overview of User and Schema in an Oracle DBMS environment
A schema is a collection of database objects, including logical structures such as tables, views, sequences, stored procedures, synonyms, indexes, clusters, and database links. A user owns a schema, and the user and the schema have the same name.
DBA basic roles and responsibilities
Duties of the DBA
A database administrator has some very precisely defined duties which need to be performed diligently. A short account of these jobs is listed below:
1. Schema definition
2. Granting data access
3. Routine maintenance
4. Backups management
5. Monitoring running jobs
6. Installation and integration
7. Configuration and migration
8. Optimization and maintenance
9. Administration and customization
10. Upgrades and backup recovery
11. Database storage reorganization
12. Performance monitoring
13. Tablespace and disk storage space monitoring
Role categories
Organizations normally hire DBAs in three roles:
1. L1 = junior/fresher DBA, with 1 to 2 years' experience
2. L2 = intermediate DBA, with 2+ to 4 years' experience
3. L3 = advanced/expert DBA, with 4+ to 6 years' experience
Component modules of a DBMS and their interactions.
Create Database User Command
The CREATE USER command creates a user. It also automatically creates a schema for that user; the schema is the logical structure through which the user's data is processed in the database (a memory component). It is created automatically by Oracle when the user is created.
Create Profile
SQL> create profile clerk limit
       sessions_per_user 1
       idle_time 30
       connect_time 600;
Create User
SQL> create user dcranney
       identified by bedrock
       default tablespace users
       temporary tablespace temp_ts
       profile clerk
       quota 500k on users1
       quota 0 on test_ts
       quota unlimited on users;
Roles and Privileges
What Is a Role?
A role is a grouping of system privileges and/or object privileges. Managing and controlling privileges is much easier when using roles: you can create roles, grant system and object privileges to the roles, and grant roles to users.
Examples of roles: CONNECT, RESOURCE, and DBA are pre-defined roles, created by Oracle when the database is created. You can grant these roles when you create a user.
To check the privileges of a role, use the following commands:
SYS> select * from ROLE_SYS_PRIVS where role = 'CONNECT';
SYS> select * from ROLE_SYS_PRIVS where role = 'DBA';
Note: the DBA role does NOT include the privileges to start up and shut down the database.
Roles are groups of privileges under a single name; those privileges are assigned to users through roles. When you add or delete a privilege from a role, all users and roles that are assigned that role automatically receive or lose that privilege. Assigning a password to a role is optional.
Whenever you create a role that is NOT IDENTIFIED, or is IDENTIFIED EXTERNALLY or BY PASSWORD, Oracle grants you the role WITH ADMIN OPTION. If you create a role IDENTIFIED GLOBALLY, the database does NOT grant you the role. If you omit both the NOT IDENTIFIED and IDENTIFIED clauses, the default is NOT IDENTIFIED.
CREATE A ROLE
SYS> create role SHARIF identified by devdb;
GRANT SYSTEM PRIVILEGES TO A ROLE
SYS> grant create table, create view, create synonym, create sequence, create trigger to SHARIF;
Grant succeeded.
GRANT A ROLE TO USERS
SYS> grant SHARIF to sony, scott;
ACTIVATE A ROLE
SCOTT> set role SHARIF identified by devdb;
DISABLE ALL ROLES
SCOTT> set role none;
GRANT A PRIVILEGE
SYS> grant create any table to SHARIF;
REVOKE A PRIVILEGE
SYS> revoke create any table from SHARIF;
SET ALL ROLES ASSIGNED TO scott AS DEFAULT
SYS> alter user scott default role all;
SYS> alter user scott default role SHARIF;
Granting and revoking privileges, roles, and objects to users
SQL> grant insert, update, delete, select on hr.employees to scott;
Grant succeeded.
SQL> grant insert, update, delete, select on hr.departments to scott;
Grant succeeded.
SQL> grant flashback on hr.employees to scott;
Grant succeeded.
SQL> grant flashback on hr.departments to scott;
Grant succeeded.
SQL> grant select any transaction to scott;
SQL> grant create any table, alter any table, select any table, insert any table, update any table, delete any table, drop any table to sharif;
Grant succeeded.
SHAM> grant all on EMP to SCOTT;
Grant succeeded.
SHAM> grant references on EMP to SCOTT;
Grant succeeded.
SQL> revoke all on suppliers from public;
SHAM> revoke all on EMP from SCOTT;
SHAM> revoke references on EMP from SCOTT cascade constraints;
Revoke succeeded.
SHAM> grant select on EMP to PUBLIC;
SYS> grant create session to PUBLIC;
Grant succeeded.
Note: if a privilege has been granted to PUBLIC, all users in the database can use it. PUBLIC sometimes acts like a role and sometimes like a user.
Note: is there a DROP TABLE privilege in Oracle? No, DROP TABLE is NOT a privilege.
What is a Privilege?
A privilege is a special right or permission. Privileges are granted to perform operations in a database. Example: the CREATE SESSION privilege allows a user to connect to the Oracle database.
The syntax for revoking privileges on a table in Oracle is:
revoke privileges on object from user;
Privileges can be assigned to a user or a role. Privileges are given to users with the GRANT command and taken away with the REVOKE command. There are two distinct types of privileges:
1. SYSTEM PRIVILEGES (granted by the DBA, e.g. ALTER DATABASE, ALTER SESSION, ALTER SYSTEM, CREATE USER)
2. SCHEMA OBJECT PRIVILEGES
System privileges are NOT directly related to any specific object or schema. Two types of users can grant and revoke system privileges to others:
– Users who have been granted the specific system privilege WITH ADMIN OPTION
– Users who have been granted GRANT ANY PRIVILEGE
You can grant and revoke system privileges to users and roles.
Powerful system-level terms: DBA (a role), SYSDBA and SYSOPER (administrative privileges), SYS and SYSTEM (built-in administrative users; SYSTEM is also the name of a tablespace).
OBJECT privileges are directly related to a specific object or schema.
– GRANT: to assign privileges or roles to a user, use the GRANT command.
– REVOKE: to remove privileges or roles from a user, use the REVOKE command.
An object privilege is the permission to perform certain actions on specific schema objects, including tables, views, sequences, procedures, functions, and packages.
– SYSTEM privileges can be granted WITH ADMIN OPTION.
– OBJECT privileges can be granted WITH GRANT OPTION.
Admin and Grant Options
With ADMIN Option (for users and roles)
SYS> select * from dba_sys_privs where grantee in ('A','B','C');
GRANTEE  PRIVILEGE       ADM
------------------------------------------------
C        CREATE SESSION  YES
Note: by default, the ADM column in dba_sys_privs is NO.
If you revoke a SYSTEM privilege from a user, it has NO impact on the grants that user has made.
With GRANT Option (for users and roles)
SONY can access the table sham.emp because the SELECT privilege was given to PUBLIC, so sham.emp is available to every user of the database. SONY has created a view EMP_VIEW based on sham.emp.
Note: if you revoke an OBJECT privilege from a user, that privilege is also revoked from everyone to whom that user granted it.
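The two propagation rules above can be seen side by side in a short sketch; the user names reuse the examples in this chapter:

-- System privilege propagation: WITH ADMIN OPTION
SYS> grant create session to SHARIF with admin option;
-- SHARIF may now grant CREATE SESSION to others; revoking it from SHARIF
-- does NOT cascade to SHARIF's grantees.

-- Object privilege propagation: WITH GRANT OPTION
SHAM> grant select on EMP to SHARIF with grant option;
-- SHARIF may now grant SELECT ON EMP onward; revoking it from SHARIF
-- DOES cascade to everyone SHARIF granted it to.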
Note: if you grant the RESOURCE role to a user, its implicit UNLIMITED TABLESPACE privilege overrides all explicit tablespace quotas. The UNLIMITED TABLESPACE system privilege lets the user allocate as much space as needed in any of the tablespaces that make up the database.
Database account lock and unlock
SQL> alter user admin identified by admin account lock;
SQL> select username from all_users where username like 'INFO%';
(Usernames are stored in uppercase by default, so match them in uppercase.)
Database security and non-database security
END
CHAPTER 12 BUSINESS INTELLIGENCE TERMINOLOGIES IN DATABASE SYSTEMS
Overview: Database systems are used for processing day-to-day transactions, such as sending a text or booking a ticket online. This is also known as online transaction processing (OLTP). Databases are good for storing information about, and quickly looking up, specific transactions.
Decision support systems (DSS) are generally defined as the class of warehouse systems that deal with solving semi-structured problems.
DSS
A DSS helps businesses make sense of data so they can undergo more informed management decision-making. It has three branches: DWH, OLAP, and DM (data mining). I will discuss these in detail below.
Characteristics of a decision support system
DSS frameworks typically consist of three main components or characteristics:
The model management system: uses various algorithms for creating, storing, and manipulating data models
The user interface: the front-end program that enables end users to interact with the DSS
The knowledge base: a collection or summarization of all information, including raw data, documents, and personal knowledge
What is a data warehouse?
A data warehouse is a collection of multidimensional, organization-wide data, typically used in business decision-making. Data warehouse toolkits for building out these large repositories generally use one of two architectures. The different approaches to building a data warehouse concentrate on the data storage layer:
Inmon's approach: design centralized storage first and then create data marts from the summarized data warehouse data and metadata. The type is normalized. It focuses on data reorganization using relational database management systems (RDBMS), holds simple relational data between a core data repository and data marts (subject-oriented databases), and the ad-hoc SQL queries needed to access data are simple.
Kimball's approach: create data marts first and then develop a data warehouse database incrementally from the independent data marts. The type is denormalized. It focuses on infrastructure functionality using multidimensional database management systems (MDBMS), such as a star schema or snowflake schema.
Data Warehouse vs. Transactional System
A few differences between a data warehouse and an operational database (transaction system):
A transactional system is designed for known workloads and transactions, like updating a user record or searching for a record; DW queries are more complex and present a more general form of data.
A transactional system contains the current data of an organization, whereas a DW normally contains historical data.
The transactional system supports the parallel processing of multiple transactions, so concurrency control and recovery mechanisms are required to maintain the consistency of the database.
An operational database query allows read and modify operations (delete and update), while an OLAP query needs only read-only access to stored data (SELECT statements).
DW involves data cleaning, data integration, and data consolidation.
A DW has a three-layer architecture: the data source layer, integration layer, and presentation layer. The following diagram shows the common architecture of a data warehouse system.
Types of Data Warehouse Systems
The types of DW systems are:
1. Data mart
2. Online analytical processing (OLAP)
3. Online transaction processing (OLTP)
4. Predictive analysis
Three-Tier Data Warehouse Architecture
Generally, a data warehouse adopts a three-tier architecture. The three tiers of the data warehouse architecture are:
Bottom tier − The bottom tier of the architecture is the data warehouse database server, a relational database system. We use back-end tools and utilities to feed data into the bottom tier; these tools perform the extract, clean, load, and refresh functions.
Middle tier − In the middle tier, we have the OLAP server, which can be implemented in either of the following ways: by relational OLAP (ROLAP), an extended relational database management system that maps operations on multidimensional data to standard relational operations; or by the multidimensional OLAP (MOLAP) model, which directly implements multidimensional data and operations.
Top tier − This tier is the front-end client layer. It holds the query tools and reporting tools, analysis tools, and data mining tools. The following diagram depicts the three-tier architecture of the data warehouse.
Data Warehouse Models
From the perspective of data warehouse architecture, we have the following data warehouse models:
1. Virtual warehouse
2. Data mart
3. Enterprise warehouse
The view over an operational data warehouse is known as a virtual warehouse. It is easy to build a virtual warehouse, but building one requires excess capacity on operational database servers.
Building a Data Warehouse from Scratch: A Step-by-Step Plan
Step 1. Goals elicitation
Step 2. Conceptualization and platform selection
Step 3. Business case and project roadmap
Step 4. System analysis and data warehouse architecture design
Step 5. Development and stabilization
Step 6. Launch
Step 7. After-launch support
Data Mart
A data mart can be created from an existing data warehouse (the top-down approach) or from other sources, such as internal operational systems or external data. Similar to a data warehouse, it is a relational database that stores transactional data (time values, numerical order, references to one or more objects) in columns and rows, making it easy to organize and access.
Data marts and data warehouses are both highly structured repositories where data is stored and managed until it is needed. Data marts are designed for a specific line of business, while a DWH is designed for enterprise-wide use. A data mart is typically small (commonly under 100 GB), whereas a DWH often exceeds 100 GB; a data mart covers a single subject, but a DWH is a multiple-subject repository. Data marts may be independent or dependent. A data mart contains a subset of organization-wide data, the subset that is valuable to a specific group within the organization.
Fact and Dimension Tables
Types of facts:
Additive – Measures that can be added across all dimensions.
Semi-additive – Measures that may be added across some dimensions and not others.
Non-additive – Stores basic units of measurement of a business process that cannot be summed. Real-world examples include sales, phone calls, and orders.
Types of dimensions:
Conformed dimensions – A conformed dimension means the same thing in relation to every fact table it relates to; it is used in more than one star schema or data mart.
Outrigger dimensions – A dimension may contain a reference to another dimension table; these secondary dimensions are called outrigger dimensions. This kind of dimension should be used carefully.
Shrunken rollup dimensions – A subdivision of the rows and columns of a base dimension; these dimensions are useful for developing aggregated fact tables.
Dimension-to-dimension table joins – Dimensions may have references to other dimensions; however, these relationships can be modeled with outrigger dimensions.
Role-playing dimensions – A single physical dimension referenced multiple times in a fact table, with each reference linking to a logically distinct role for the dimension.
Junk dimensions – A collection of random transactional codes, flags, or text attributes that may not logically belong to any specific dimension.
Degenerate dimensions – A dimension without a corresponding dimension table, used in transaction and collecting-snapshot fact tables; it does not have its own dimension table because it is derived from the fact table.
Swappable dimensions – Used when the same fact table is paired with different versions of the same dimension.
Step dimensions – Sequential processes, like web-page events, usually have a separate row in the fact table for every step in the process; a step dimension tells where the specific step belongs in the overall session.
Extract, Transform, Load tool configuration (ETL/ELT)
Successful data migration includes:
Extracting the existing data.
Transforming the data so it matches the new formats.
Cleansing the data to address any quality issues.
Validating the data to make sure the move goes as planned.
Loading the data into the new system (a minimal load sketch follows below).
Staging area
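A minimal sketch of the transform-and-load step, assuming hypothetical staging and dimension tables; Oracle's MERGE performs the upsert typical of an ELT load:

MERGE INTO dim_customer d
USING stg_customer s
ON (d.customer_id = s.customer_id)
WHEN MATCHED THEN UPDATE SET
  d.customer_name = TRIM(s.customer_name),   -- transform: cleanse whitespace
  d.city          = UPPER(s.city)            -- transform: standardize case
WHEN NOT MATCHED THEN INSERT (customer_id, customer_name, city)
VALUES (s.customer_id, TRIM(s.customer_name), UPPER(s.city));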
ETL Cycle Flow
ETL to data warehouse, OLAP, and business reporting tiers
Types of Data Warehouse Extraction Methods
There are two types of data warehouse extraction methods: logical and physical.
1. Logical extraction, which in turn has two methods:
i) Full extraction – for example, exporting a complete table in the form of a flat file.
ii) Incremental extraction – the changes in the source data are tracked since the last successful extraction.
2. Physical extraction, which has two methods, online and offline:
i) Online extraction – the extraction process connects directly to the source system and extracts the source data.
ii) Offline extraction – the data is not extracted directly from the source system but is staged explicitly outside the source system.
Data Capture
Data capture is an advanced extraction process. It enables the extraction of data from documents, converting it into machine-readable form. This process is used to collect important organizational information when the source systems are paper or electronic documents (receipts, emails, contacts, etc.).
OLAP Model and Its Types
Online analytical processing (OLAP) is a tool that enables users to perform data analysis across various database systems simultaneously. Users can use this tool to extract, query, and retrieve data, and to analyze the collected data from diverse points of view.
There are three main types of OLAP servers:
ROLAP (relational OLAP): an application based on relational DBMSs.
MOLAP (multidimensional OLAP): an application based on multidimensional DBMSs.
HOLAP (hybrid OLAP): an application using both relational and multidimensional techniques.
An OLAP architecture has three components for each type: the database server, the ROLAP/MOLAP/HOLAP server, and the front-end tool.
Characteristics of OLAP
The FASMI characteristics of OLAP methods (the term is derived from the first letters of the characteristics) are:
Fast – The system is targeted to deliver most responses to the user within about five seconds, with elementary analyses taking no more than one second and very few taking more than 20 seconds.
Analysis – The system can cope with any business logic and statistical analysis relevant to the application and the user, and keeps it easy enough for the target user. Users must be able to define new ad hoc calculations as part of the analysis and report on the data in any desired way without having to program; although some pre-programming may be acceptable, products (like Oracle Discoverer) that do not allow adequate end-user-oriented calculation flexibility are excluded.
Share – The system implements all the security requirements for confidentiality and, if multiple write access is needed, concurrent update locking at an appropriate level. Not all applications need users to write data back, but for the increasing number that do, the system should be able to handle multiple updates in a timely, secure manner.
Multidimensional – This is the basic requirement: the OLAP system must provide a multidimensional conceptual view of the data, including full support for hierarchies, as this is certainly the most logical way to analyze businesses and organizations.
Information – The system must be able to hold all the data needed by the application, however much that is and wherever it resides.
OLAP Operations
Since OLAP servers are based on a multidimensional view of data, we discuss OLAP operations on multidimensional data. The list of OLAP operations:
1. Roll-up
2. Drill-down
3. Slice and dice
4. Pivot (rotate)
Roll-up
Roll-up performs aggregation on a data cube in either of the following ways:
By climbing up a concept hierarchy for a dimension
By dimension reduction
The following diagram illustrates how roll-up works.
Roll-up is performed by climbing up the concept hierarchy for the dimension location. Initially the concept hierarchy was "street < city < province < country". On rolling up, the data is aggregated by ascending the location hierarchy from the level of city to the level of country, so the data is grouped into countries rather than cities. When roll-up is performed, one or more dimensions of the data cube are removed.
Drill-down
Drill-down is the reverse operation of roll-up. It is performed in either of the following ways:
By stepping down a concept hierarchy for a dimension
By introducing a new dimension
The following diagram illustrates how drill-down works.
Drill-down is performed by stepping down the concept hierarchy for the dimension time. Initially the concept hierarchy was "day < month < quarter < year". On drilling down, the time dimension descends from the level of quarter to the level of month. When drill-down is performed, one or more dimensions of the data cube are added, navigating from less detailed data to highly detailed data.
Slice
The slice operation selects one particular dimension from a given cube and provides a new sub-cube. Consider the following diagram, which shows how slicing works: here the slice is performed for the dimension "time" using the criterion time = "Q1".
It forms a new sub-cube by selecting one or more dimensions.
Dice
Dice selects two or more dimensions from a given cube and provides a new sub-cube. Consider the following diagram, which shows the dice operation. The dice operation on the cube is based on selection criteria involving three dimensions:
(location = "Toronto" or "Vancouver")
(time = "Q1" or "Q2")
(item = "Mobile" or "Modem")
Pivot
The pivot operation is also known as rotation. It rotates the data axes in view to provide an alternative presentation of the data. Consider the following diagram, which shows the pivot operation.
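Roll-up, slice, and dice translate naturally into SQL. A sketch, assuming a hypothetical fact table sales(country, city, quarter, item, amount) that mirrors the cube in the diagrams above:

-- Roll-up: GROUP BY ROLLUP climbs the location hierarchy (city -> country)
SELECT country, city, SUM(amount)
FROM   sales
GROUP  BY ROLLUP (country, city);

-- Slice: fix one dimension (time = 'Q1')
SELECT * FROM sales WHERE quarter = 'Q1';

-- Dice: restrict two or more dimensions
SELECT *
FROM   sales
WHERE  city    IN ('Toronto', 'Vancouver')
AND    quarter IN ('Q1', 'Q2')
AND    item    IN ('Mobile', 'Modem');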
Hybrid Data Marts
Data marts can also be hybrid: a hybrid data mart combines data from an existing data warehouse and other operational source systems. It unites the speed and end-user focus of a top-down approach with the benefits of the enterprise-level integration of the bottom-up method.
Data mining techniques
There are many techniques used by data mining technology to make sense of your business data. Here are a few of the most common:
Association rule learning: Also known as market basket analysis, association rule learning looks for interesting relationships between variables in a dataset that might not be immediately apparent, such as determining which products are typically purchased together. This can be incredibly valuable for long-term planning.
Classification: This technique sorts items in a dataset into target categories or classes based on common features, allowing the algorithm to categorize even complex data cases neatly.
Clustering: This approach groups similar data into clusters; outliers may go undetected, or they will fall outside the clusters. To help users understand the natural groupings or structure within the data, you can apply the process of partitioning a dataset into a set of meaningful sub-classes called clusters. This process looks at all the objects in the dataset and groups them based on similarity to each other, rather than on predetermined features.
Modeling: Modeling is what people often think of when they think of data mining. It is the process of taking some data and building a model that reflects that data. Usually the aim is to address a specific problem by modeling the world in some way and, from the model, developing a better understanding of it.
Decision tree: Another method for categorizing data is the decision tree, which asks a series of cascading questions to sort items in the dataset into relevant classes.
Regression: This technique is used to predict a range of numeric values, such as sales, temperatures, or stock prices, based on a particular dataset. Data can be smoothed by fitting it to a regression function; the regression used may be linear (one independent variable) or multiple (several independent variables). Regression is a technique that conforms data values to a function. Linear regression involves finding the "best" line to fit two attributes (or variables) so that one attribute can be used to predict the other (see the sketch below).
Outlier detection: This technique refers to the observation of data items in the dataset that do not match an expected pattern or expected behavior. It can be used in a variety of domains, such as intrusion detection, fraud or fault detection, etc. Outlier detection is also called outlier analysis or outlier mining.
Sequential patterns: This technique helps discover or identify similar patterns or trends in transaction data over a certain period.
Prediction: The end user can predict the most frequently recurring outcomes.
Knowledge extraction from business intelligence techniques
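As a concrete illustration of the regression technique above, Oracle provides built-in linear-regression aggregate functions that fit the "best" line y = slope*x + intercept; the sales_history table and its columns are hypothetical:

SELECT REGR_SLOPE(revenue, ad_spend)     AS slope,
       REGR_INTERCEPT(revenue, ad_spend) AS intercept,
       REGR_R2(revenue, ad_spend)        AS r_squared   -- goodness of fit
FROM   sales_history;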
Steps/Tasks Involved in Data Preprocessing
1. Data cleaning: The data can have many irrelevant and missing parts; data cleaning handles this. It involves handling missing data and noisy data: fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies.
2. Data transformation: This step transforms the data into forms appropriate for the mining process.
3. Data discretization: Part of data reduction, but of particular importance, especially for numerical data.
4. Data reduction: Data mining is a technique used to handle huge amounts of data, and analysis becomes harder when working with such volumes. Data reduction aims to increase storage efficiency and reduce data storage and analysis costs.
5. Data integration: Integration of multiple databases, data cubes, or files.
Methods of treating missing data:
1. Ignore and discard the data
2. Fill in the missing value manually
3. Use a global constant to fill in the missing values
4. Imputation using the mean, median, or mode
5. Replace missing values using a prediction/classification model
6. The k-nearest neighbor (k-NN) approach (the best approach)
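A few of these imputation methods expressed as SQL sketches; the sensor_readings table and its columns are hypothetical, and the UPDATE statements are alternatives (apply one):

-- Global constant imputation
UPDATE sensor_readings SET temp_c = 0 WHERE temp_c IS NULL;

-- Median imputation
UPDATE sensor_readings
SET    temp_c = (SELECT MEDIAN(temp_c) FROM sensor_readings)
WHERE  temp_c IS NULL;

-- Mean imputation at query time, without modifying stored data
SELECT reading_id,
       NVL(temp_c, AVG(temp_c) OVER ()) AS temp_filled
FROM   sensor_readings;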
Difference between data steward and data curator:
Information Retrieval (IR) deals with the organization, storage, retrieval, and evaluation of information from document repositories, particularly textual information. An Information Retrieval model selects and ranks the documents that match the need the user has expressed in the form of a query.
Information retrieval vs. data retrieval:
- Scope: IR retrieves information about a subject; data retrieval determines the keywords in the user query and retrieves matching data from a database management system such as an ODBMS.
- Errors: in IR, small errors are likely to go unnoticed; in data retrieval, a single erroneous object means total failure.
- Structure: IR data is not always well structured and is semantically ambiguous; data retrieval has a well-defined structure and semantics.
- Answers: IR does not directly provide a solution to the user of the database system; data retrieval provides solutions.
- Matching: IR results are approximate matches, ordered by relevance; data retrieval results are exact matches, not ranked by relevance.
- Model: IR is a probabilistic model; data retrieval is a deterministic model.
Techniques of information retrieval:
1. Traditional systems
2. Non-traditional systems
There are three types of Information Retrieval (IR) models:
1. Classical IR model
2. Non-classical IR model
3. Alternative IR model
The classical IR models in further detail:
1. Boolean model: Information and queries are translated into Boolean expressions; a document matches when the expression evaluates to true. It uses the Boolean operations AND, OR, and NOT to combine multiple terms based on what the user asks.
2. Vector space model: Documents and queries are represented as vectors, and documents are retrieved and ranked by how similar their vectors are to the query vector (a small similarity sketch follows).
3. Probability distribution model: Documents are treated as distributions of terms, and queries are matched based on the similarity of these representations, using entropy or the probable utility of the document. Its two variants are:
 - Similarity-based probability distribution model
 - Expected-utility-based probability distribution model
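A minimal sketch of the vector space model: documents and the query are represented as term-frequency vectors and ranked by cosine similarity. The documents are made-up examples.

import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "index structures speed up query processing",
    "query optimization and query processing",
    "backup and recovery of databases",
]
query = Counter("query processing".split())

for d in sorted(docs, key=lambda d: cosine(Counter(d.split()), query), reverse=True):
    print("%.3f" % cosine(Counter(d.split()), query), d)

The documents about query processing rank above the unrelated one, which is exactly the relevance ordering that the IR column of the comparison above describes.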
END
CHAPTER 13 DBMS INTEGRATION WITH BPMS
Overview: Business process management systems (BPMS) are significant extensions of workflow management (WFM) systems. A DBMS and a BPMS used together give better performance: the BPMS holds operational and document-flow data and runs at the execution level, while the DBMS holds transactional and log data; all transactional data passes through the BPMS.
A key element of BPMN is the choice of shapes and icons used for the graphical elements identified in the specification. The intent is to create a standard visual language that all process modelers will recognize and understand. An implementation that creates and displays BPMN process diagrams SHALL use the graphical elements, shapes, and markers illustrated in the specification.
Six Sigma is another set of practices that originates from manufacturing, in particular from engineering and production practices at Motorola. Its main characteristic is a focus on minimizing defects (errors). Six Sigma places a strong emphasis on measuring the output of processes or activities, especially in terms of quality, and encourages managers to systematically compare the effects of improvement initiatives on those outputs. Sigma symbolizes a single standard deviation from the mean.
The two main Six Sigma methodologies are DMAIC and DMADV; each has its own set of recommended procedures for business transformation. DMAIC is a data-driven method used to improve existing products or services for better customer satisfaction; it is the acronym for five phases: Define, Measure, Analyse, Improve, Control. DMAIC is applied to the manufacturing of a product or the delivery of a service. DMADV is part of the Design for Six Sigma (DFSS) process, used to design or re-design processes of product manufacturing or service delivery; its five phases are Define, Measure, Analyse, Design, Validate.
A business process is a collection of related, structured activities that produce a specific service or a particular goal for a particular person or persons. Business process management (BPM) includes methods, techniques, and software to design, enact, control, and analyze operational processes. The BPM lifecycle is commonly described in five stages: design, model, execute, monitor, and optimize (sometimes extended with a sixth stage, process reengineering).
The difference between BPM and BPMS: BPM is a discipline that uses various methods to discover, model, analyze, measure, improve, and optimize business processes; BPM is a method, technique, or way of doing things, while a BPMS is a collection of technologies for building software systems or applications that automate processes. A BPMS is a software tool used to improve an organization's business processes through their definition, automation, and analysis; it also acts as a valuable automation tool for businesses to generate competitive advantage through cost reduction, process excellence, and continuous process improvement. As BPM is a discipline
used by organizations to identify, document, and improve their business processes, BPMS is used to enable aspects of BPM.
Enactable business process model
Curtis et al. list five modeling goals: to facilitate human understanding and communication; to support process improvement; to support process management; to automate process guidance; and to automate execution support. These goals, plus the additional goals of automating process execution and automating process management, are the goals of using a BPMS. The goals form a progression from problem description to solution design and then action, and they would be impossible to achieve without a process model. An enactable model gives a BPMS a limited decision-making ability, the ability to generate change-request signals to other sub-systems or team "members," and the ability to take account of endogenous or exogenous changes to itself, the business processes it manages, or the environment. Together these abilities enable the BPMS to make automatic changes to business processes, within a scope limited to the coverage of its decision rules, the control privileges of its change-request signals, and its ability to recognize patterns from its sensors.
Business Process Modeling Notation (BPMN)
BPMN has elements such as labels, tokens, activities, cases, events, processes, and sequence symbols.
BPMN terms:
Task: a logical unit of work that is carried out as a single whole.
Resource: a person or a machine that can perform specific tasks.
Activity: the performance of a task by a resource.
Case: a sequence of activities performed to achieve some goal, such as an order, an insurance claim, or a car assembly.
Work item: the combination of a case and a task that is just about to be carried out.
Process: describes how a particular category of cases shall be managed.
Control flow constructs: sequence, selection, iteration, parallelisation.
BPMN concepts:
Events: things that happen instantaneously (e.g., an invoice arrives).
Activities: units of work that have a duration (e.g., checking an invoice). Processes, events, and activities are logically related.
Sequence: the most elementary form of relation; one event or activity A is followed by another event or activity B.
Start event: drawn as a circle with a thin border.
End event: drawn as a circle with a thick border.
Label: the name given to each activity and event.
Token: once a process instance has been spawned, a token identifies the progress (or state) of that instance.
Gateway: a gating mechanism that either allows or disallows the passage of tokens through the gateway.
Split gateway: a point where the process flow diverges; it has one incoming sequence flow and multiple outgoing sequence flows (representing the branches that diverge).
Join gateway: a point where the process flow converges.
Mutually exclusive: only one branch can be true each time the XOR split is reached by a token.
Exclusive (XOR) split: models the relation between two or more alternative activities, as in the approval or rejection of a claim.
Exclusive (XOR) join: merges two or more alternative branches that may previously have been forked with an XOR split; indicated by an empty diamond, or a diamond marked with an "X".
Naming/label conventions in BPMN: a label begins with a verb followed by a noun; the noun may be preceded by an adjective, and the verb may be followed by a complement explaining how the action is being done.
The flow of a process with a big database
CHAPTER 14 RAID STRUCTURE AND MEMORY MANAGEMENT
Redundant Arrays of Independent Disks
RAID, or "Redundant Arrays of Independent Disks", is a technique that uses a combination of multiple disks, instead of a single disk, for increased performance, data redundancy, reliability, or both. The term was coined by David Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987.
Disk array: an arrangement of several disks that gives the abstraction of a single, large disk.
RAID techniques:
Details of RAID structure
Row format and column format in Oracle In-Memory; in-memory storage index
In-Memory compression in storage; storage manager components
Database system memory components (fastest to slowest):
1. CPU registers
2. Cache
3. Main memory
4. Flash memory (SSD, solid-state disk; also known as EEPROM, Electrically Erasable Programmable Read-Only Memory)
5. Magnetic disk (hard disks vs. floppy disks)
6. Optical disk (CD-ROM, CD-RW, DVD-RW, and DVD-RAM)
7. Tape storage
Performance measures of hard disks / accessing a disk page
1. Access time: the time from when a read or write request is issued to when the data transfer begins. It is composed of:
 - Seek time (moving the arm to position the disk head on the track); varies from about 2 to 15 ms
 - Rotational delay/latency (waiting for the block to rotate under the head); from 0 to 8.3 ms (average about 4.2 ms)
 - Data transfer time/rate (moving data to/from the disk surface); about 3.5 ms per 256 KB page
Seek time and rotational delay dominate, so the key to lower I/O cost is reducing seek and rotation delays, whether by hardware or by software. A quick calculation follows.
2. Data-transfer rate: the rate at which data can be retrieved from or stored on disk (e.g., 25-100 MB/s).
3. Mean time to failure (MTTF): the average time the disk is expected to run continuously without any failure.
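Plugging the representative figures above into the access-time formula shows why seek and rotation dominate random I/O; the numbers are illustrative only.

# All times in milliseconds, using the representative figures quoted above.
seek = 9.0        # average seek time (within the 2-15 ms range)
rotation = 4.2    # average rotational delay
transfer = 3.5    # transfer time for one 256 KB page

one_page = seek + rotation + transfer
pages = 100
print("one random page: %.1f ms" % one_page)
print("100 random pages: %.0f ms" % (pages * one_page))
# Sequential access pays seek + rotation once, then mostly transfers:
print("100 sequential pages: %.0f ms" % (seek + rotation + pages * transfer))

The sizeable gap between the random and sequential totals is the reason the text stresses reducing seek and rotation delays.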
Block vs. page vs. sector
Sector: a physical spot on a formatted disk that holds information; traditionally each sector holds 512 bytes of data. A hard disk platter has many concentric circles on it, called tracks, and every track is further divided into sectors.
Block: a sequence of bits and bytes made up of a contiguous sequence of sectors from a single track; also called a physical record on hard drives and floppies. Any data transferred between the hard disk and RAM is usually sent in blocks; the default NTFS block size is 4096 bytes. A block is the smallest unit of logical memory used to read a file or to write data to a file. When a new row/record is inserted, it goes into an existing block/page if that block/page has space; otherwise a new block is assigned within the file.
Page: a page is made up of a block or a group of blocks and has a fixed size, usually 2 KB, 4 KB, or 8 KB. Pages are virtual blocks: they manage data stored in RAM, and a page is what is loaded into the processor from main memory. A disk can read or write a page quickly. Each block/page consists of some records: roughly 4 tuples fit in one block if the block size is 2 KB, and roughly 30 tuples fit in one block if the block size is 8 KB. Operating systems prefer pages over blocks, but both are storage units, and processing with pages is generally easier and faster than with raw blocks.
Records may be of fixed length (an inflexible structure in memory) or of variable length (the size depends on the data types of the columns, giving a more complex structure).
Block diagram depicting paging: the page map table (PMT) contains pages numbered 0 to 7.
Pinned block: a memory block that is not allowed to be written back to disk.
Toss-immediate strategy: frees the space occupied by a block as soon as the final tuple of that block has been processed.
Example: Suppose an employee table has columns such as empid (12 bytes), name (59 bytes), CNIC (15 bytes), and so on, totalling 230 bytes; each row in the employee table then occupies 230 bytes, so around 2 rows can be stored in one small block.
Example: If a hard drive has a block size of 4 KB and you store a 4.5 KB file, the file requires 8 KB on the hard drive (2 whole blocks) but only 4.5 KB on a floppy (9 floppy-size 512-byte blocks). A quick calculation of both examples follows.
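A minimal sketch that reproduces both worked examples; the 512-byte block size for the employee example is an assumption, since the text does not state it.

def rows_per_block(block_size, row_size):
    # How many fixed-size rows fit in one block (ignoring block overhead).
    return block_size // row_size

def blocks_needed(file_size, block_size):
    # Whole blocks needed to store a file; the last block is partly wasted.
    return -(-file_size // block_size)  # ceiling division

print(rows_per_block(512, 230))              # about 2 rows, as in the example
print(blocks_needed(int(4.5 * 1024), 4096))  # 4.5 KB file, 4 KB blocks -> 2 blocks (8 KB)
print(blocks_needed(int(4.5 * 1024), 512))   # same file on 512 B floppy blocks -> 9 blocks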
Buffer manager / buffer management
Buffer: the portion of main memory available to store copies of disk blocks.
Buffer manager: the subsystem responsible for buffering disk blocks in main memory. The overall goal is to minimize the number of disk accesses. A buffer manager is similar to the virtual memory manager of an operating system.
Architecture: the buffer manager stages pages from external storage into the main memory buffer pool; the file and index layers make calls to the buffer manager.
What is the steal approach in a DBMS? What are the buffer manager policies and roles? How is data stored on disk? These questions are answered below.
Note: The buffer manager moves pages between the main memory buffer pool (volatile memory) and external storage (non-volatile disk). When execution starts, the file and index layers call the buffer manager.
Steal: the buffer manager may replace (and write to disk) a cached page that has been updated by a transaction that has not yet committed, in order to make room for a page requested by another transaction. The advantage of steal is that it avoids the need for very large buffer space, but it requires undo support: removing the effects of an uncommitted transaction from disk.
No-steal: pages cannot be evicted from memory (and thus written to disk) until the updating transaction commits, which makes atomicity easy to ensure. The deferred-update (NO-UNDO) recovery scheme uses a no-steal approach.
Force: when a transaction executes, all pages it modified are forced to disk before the transaction commits. Durability is then simple to ensure, and REDO is never needed during recovery, since any committed transaction already has all its updates on disk.
No-force: modified pages need not be written at commit time. Typical database systems employ a steal/no-force strategy; a toy sketch of the steal policy follows.
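A toy sketch of the steal decision, assuming pages tracked in a plain dictionary; the class and method names are hypothetical, and no real DBMS (Oracle included) implements its buffer pool this way.

class ToyBufferPool:
    def __init__(self, capacity, steal=True):
        self.capacity = capacity
        self.steal = steal
        self.pool = {}  # page_id -> (data, dirty, txn_id)

    def _evict_one(self):
        for pid, (data, dirty, txn) in list(self.pool.items()):
            if dirty and not self.steal:
                continue  # no-steal: dirty pages of uncommitted txns stay pinned
            if dirty:
                print("STEAL: flushing uncommitted page", pid, "of", txn, "to disk")
            del self.pool[pid]
            return
        raise RuntimeError("no evictable page under the no-steal policy")

    def get_page(self, pid, txn=None, write=False):
        if pid not in self.pool:
            if len(self.pool) >= self.capacity:
                self._evict_one()
            self.pool[pid] = ("data-%d" % pid, False, None)  # simulated disk read
        if write:
            data, _, _ = self.pool[pid]
            self.pool[pid] = (data, True, txn)

bp = ToyBufferPool(capacity=2, steal=True)
bp.get_page(1, txn="T1", write=True)
bp.get_page(2, txn="T1", write=True)
bp.get_page(3)  # pool is full: with steal=True an uncommitted dirty page is flushed

With steal=False the last call would fail instead, which is exactly the trade-off described above: no-steal keeps atomicity simple but demands a buffer pool large enough for all uncommitted changes.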
Preferred policy: steal/no-force. This combination is the most complicated to implement but allows the highest flexibility and performance. Steal is why enforcing atomicity is hard, and no-force is why enforcing durability is hard; with no-force, redo support is needed to complete a committed transaction's writes on disk.
Disk access
File: a file is logically a sequence of records, where a record is a sequence of fields. The hard disk, also called secondary memory, stores data permanently and is non-volatile.
File scans can be made fast with read-ahead ("track-at-a-crack"); this requires contiguous file allocation, so it may need to bypass the OS file system.
Sorted files: records are sorted by a search key; good for equality and range search.
Hashed files: records are grouped into buckets by a search key; good for equality search.
Disks can retrieve a random page at a fixed cost, whereas tapes can only read pages sequentially.
Database tables and indexes may be stored on disk in one of several forms, including ordered/unordered flat files, ISAM, heap files, hash buckets, or B+ trees. The most used forms are B-trees and ISAM.
Data on a hard disk is stored in microscopic areas called magnetic domains on the magnetic material; each domain stores either a 1 or a 0. When the computer is switched off, the head is lifted to a safe zone, normally termed the parking zone, to prevent the head from scratching against the data zone on a platter when the air bearing subsides. This process is called parking.
The basic difference between magnetic tape and magnetic disk is that magnetic tape is used for backups, whereas magnetic disk is used as secondary storage.
Dynamic storage-allocation problem/algorithms
Memory allocation is the process by which computer programs are assigned memory or space. There are four strategies (a small simulation appears after the fragmentation discussion below):
First fit: the first hole that is big enough is allocated to the program, i.e., the first sufficient block from the beginning of main memory.
Best fit: the smallest hole that is big enough is allocated, i.e., the smallest sufficient partition among the free partitions.
Worst fit: the largest hole that is big enough is allocated, i.e., the largest sufficient freely available partition in main memory.
Next fit: mostly similar to first fit, but the search for the first sufficient partition starts from the point of the last allocation.
Note: first fit and best fit are better than worst fit in terms of both speed and storage utilization.
Static and dynamic loading: loading a process into main memory is done by a loader. With static loading, the entire program is loaded into a fixed address before execution, which requires more memory space: the entire program and all data of a process must be in physical memory for the process to execute, so the size of a process is limited to the size of physical memory. With dynamic loading, parts of the program are loaded only when needed, relaxing that limit.
Methods involved in memory management
There are various methods by which the operating system can manage memory intelligently:
 Fragmentation
As processes are loaded into and removed from memory, free memory space is broken into little pieces. After some time, processes cannot be allocated to memory blocks because the blocks are too small, and the blocks remain unused. This problem is known as fragmentation. Fragmentation categories:
1. External fragmentation: total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation: the memory block assigned to a process is bigger than requested; some portion of the block is left unused and cannot be used by another process.
(A separate notion with the same name exists in distributed databases, where a relation can be fragmented in two ways: horizontal fragmentation and vertical fragmentation. Hybrid fragmentation applies horizontal and vertical partitioning together, and mixed fragmentation is a group of rows and columns in a relation; the original relation in hybrid fragmentation is reconstructed by performing unions and full outer joins.)
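A small simulation of the fit strategies described above; the hole sizes and the request are made-up numbers. Next fit would be first fit plus a remembered starting index.

def allocate(holes, request, strategy):
    # Return the index of the chosen hole, or None if nothing is big enough.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]  # earliest big-enough hole
    if strategy == "best":
        return min(candidates)[1]   # smallest big-enough hole
    if strategy == "worst":
        return max(candidates)[1]   # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]  # free partition sizes, in KB
for strategy in ("first", "best", "worst"):
    i = allocate(holes, 212, strategy)
    print("%s fit chooses hole %d of size %d KB" % (strategy, i, holes[i]))

First fit picks the 500 KB hole, best fit the 300 KB hole (least leftover, hence less waste), and worst fit the 600 KB hole (the leftover stays large enough to remain usable).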
Reduce external fragmentation by compaction:
- Shuffle memory contents to place all free memory together in one large block.
- Compaction is possible only if relocation is dynamic and done at execution time.
- I/O problem: latch the job in memory while it is involved in I/O, or do I/O only into OS buffers.
 Segmentation
Segmentation is a memory management technique in which each job is divided into several segments of different sizes, one for each module containing pieces that perform related functions. Each segment is a different logical address space of the program; a segment is a logical unit.
Segmentation with paging
Both paging and segmentation have their advantages and disadvantages, so it is better to combine the two schemes so that each improves on the other; the combined scheme is often called paged segmentation. Each segment in this scheme is divided into pages, and each segment is maintained in a page table. The logical address is divided into the following three parts (a small translation sketch follows):
- Segment number (S)
- Page number (P)
- Displacement or offset (D)
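A minimal sketch of translating a (segment, page, offset) logical address under segmentation with paging; the table contents and the 4 KB page size are made-up values.

PAGE_SIZE = 4096  # bytes, assumed for illustration

# Per-segment page tables: segment number -> list of frame numbers.
segment_table = {
    0: [5, 9, 2],  # segment 0 has three pages
    1: [7, 1],     # segment 1 has two pages
}

def translate(segment, page, offset):
    # Map a logical (S, P, D) address to a physical address.
    if segment not in segment_table:
        raise MemoryError("bad segment number")
    frames = segment_table[segment]
    if page >= len(frames) or offset >= PAGE_SIZE:
        raise MemoryError("bad page number or offset")
    return frames[page] * PAGE_SIZE + offset

print(translate(0, 1, 100))  # frame 9 -> 9 * 4096 + 100 = 36964
print(translate(1, 0, 0))    # frame 7 -> 28672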
The Intel 386 uses segmentation with paging for memory management, with a two-level paging scheme.
 Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main memory (moved) to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory. Though performance is usually affected by the swapping process, it helps in running multiple and big processes in parallel, and for that reason swapping is also known as a technique for memory compaction.
Note: bring a page into memory only when it is needed; the same page may be brought into memory several times.
 Paging
A page is a unit of data storage: it is loaded into the processor from main memory, is made up of a block or a group of blocks, and has a fixed size, usually 2 KB or 4 KB. A page is also called a virtual page or memory page. The transfer of pages between main memory and secondary memory is known as paging.
Paging is a memory management technique in which the process address space is broken into blocks of the same size called pages (the size is a power of 2, between 512 bytes and 8192 bytes); the size of a process is measured in the number of pages. Similarly, main memory is divided into small fixed-size blocks of physical memory called frames (again a power of 2, between 512 bytes and 8192 bytes); the size of a frame is kept the same as that of a page, to get optimum utilization of main memory and to avoid external fragmentation.
A hard disk stores information in the form of magnetic fields: data is stored digitally in the form of tiny magnetized regions on the platter, where each region represents a bit. (Microsoft SQL Server databases, for comparison, are stored on disk in two files: a data file and a log file.)
Note: to run a program of size n pages, the OS needs to find n free frames before loading the program.
Implementation of the page table
The page table is kept in main memory:
- The page-table base register (PTBR) points to the page table.
- The page-table length register (PTLR) indicates the size of the page table.
In this scheme, every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction itself. This double-access problem can be solved with a special fast-lookup hardware cache called associative memory, or translation look-aside buffers (TLBs).
The flow of tasks in memory
A program must be brought into memory and placed within a process for it to be run.
A collection of processes waits on disk to be brought into memory so the program can run.
Binding of instructions and data to memory
Address binding of instructions and data to memory addresses can happen at three different stages:
- Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
- Load time: relocatable code must be generated if the memory location is not known at compile time.
- Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another; this needs hardware support for address maps (e.g., base and limit registers).
Multistep processing of a user program in memory:
The concept of a logical address space bound to a separate physical address space is central to proper memory management.
- Logical address: generated by the CPU; also referred to as a virtual address.
- Physical address: the address seen by the memory unit.
Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme. The user program deals with logical addresses and never sees the real physical addresses. The logical address space of a process can be noncontiguous; the process is allocated physical memory whenever such memory is available.
Address Translation Architecture
END
CHAPTER 15 ORACLE DATABASE FUNDAMENTALS AND ITS ADMINISTRATION
Oracle Database History
This book uses Oracle tools. Oracle versions and their meaning:
1. Oracle Database 11g
2. Oracle Database 12c
3. Oracle 18c (new name) = Oracle Database 12c Release 2, 12.2.0.2 (patch set for 12c Release 2)
4. Oracle 19c (new name) = Oracle Database 12c Release 2, 12.2.0.3 (terminal patch set for Release 12.2)
Tools/utilities for administering an Oracle database as a DBA:
- Oracle Universal Installer (installs the Oracle software; it starts Oracle DBCA to create the database)
- Oracle DBCA (creates a database from templates; also enables you to create a database from a seed database)
- Database Upgrade Assistant (upgrades a database to the newest Oracle release)
- Net Configuration Assistant (NETCA; enables you to configure the listener)
- Oracle Enterprise Manager Database Control (controls the database through a web-based interface, including performance advisors)
Oracle Database editions are hierarchically broken down as follows:
- Enterprise Edition: offers all features, including superior performance and security; the most robust.
- Personal Edition: nearly the same as Enterprise Edition, except it does not include the Oracle Real Application Clusters option.
- Standard Edition: contains base functionality for users who do not require Enterprise Edition's robust package.
- Express Edition (XE): the lightweight, free, limited Windows and Linux edition.
- Oracle Lite: for mobile devices.
Database instance / Oracle instance
A database instance is the interface between client applications (users) and the database. An Oracle instance consists of three main parts: the System Global Area (SGA), the Program Global Area (PGA), and the background processes. At startup, the instance:
1. Searches for a server parameter file in a platform-specific default location and, if not found, for a text initialization parameter file (specifying STARTUP with the SPFILE or PFILE parameters overrides the default behavior).
2. Reads the parameter file to determine the values of initialization parameters.
3. Allocates the SGA based on the initialization parameter settings.
4. Starts the Oracle background processes.
5. Opens the alert log and trace files and writes all explicit parameter settings to the alert log in valid parameter syntax.
Database Systems Handbook BY: MUHAMMAD SHARIF 307 Oracle Database creates server processes to handle the requests of user processes connected to an instance. A server process can be either of the following: A dedicated server process, which services only one user process. A shared server process, which can service multiple user processes. We can see the listener has the default name of "LISTENER" and is listening for TCP connections on port 1521. The listener process is started when the server is started (or whenever the instance is started). The listener is only required for connections from other servers, and the DBA performs the creation of the listener process. When a new connection comes in over the network, the listener passes the connection to Oracle.
Main database shutdown modes
SHUTDOWN NORMAL | TRANSACTIONAL | IMMEDIATE | ABORT
Database startup modes:
STARTUP RESTRICT | STARTUP MOUNT RESTRICT | STARTUP FORCE | STARTUP NOMOUNT | STARTUP MOUNT | open
Read-only modes: ALTER DATABASE OPEN READ ONLY; ALTER DATABASE OPEN;
Details of the shutdown modes:
Shutdown normal (SHUTDOWN or SHUTDOWN NORMAL):
1. New connections are not allowed.
2. Connected users can complete their ongoing transactions.
3. Idle sessions are not disconnected.
4. The database shuts down once all connected users have logged out manually.
5. It is a graceful shutdown, so it does not require instance crash recovery (ICR) at the next startup.
6. A common SCN is written to the control files and data files before the database shuts down.
Shutdown transactional:
1. New connections are not allowed.
2. Connected users can complete their ongoing transactions.
3. Idle sessions are disconnected.
4. The database shuts down once the ongoing transactions have completed (commit/rollback).
Hence it is also a graceful shutdown and does not require ICR at the next startup.
Shutdown immediate:
1. New connections are not allowed.
2. Connected users cannot continue their ongoing transactions.
3. Idle sessions are disconnected.
4. Oracle rolls back the ongoing (uncommitted) transactions and the database shuts down.
5. A common SCN is written to the control files and data files before the database shuts down.
Hence it is also a graceful shutdown and does not require ICR at the next startup.
Shutdown abort:
1. New connections are not allowed.
2. Connected users cannot continue their ongoing transactions.
3. Idle sessions are disconnected.
4. The database shuts down abruptly (no commit, no rollback).
Hence it is an abrupt shutdown and requires ICR at the next startup.
Types of standby databases
1. Physical standby database
2. Snapshot standby database
3. Logical standby database
A standby database is a transactionally consistent copy of the primary database, created from a backup of the primary. By applying archived redo logs from the primary database to the standby database, you keep the two databases synchronized. Using a backup copy of the primary database, you can create up to nine standby databases and incorporate them in a Data Guard configuration. A standby database has two main purposes: disaster protection and protection against data corruption.
Physical standby database
A physical standby database is physically identical to the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. The physical standby database is updated by performing recovery using redo data received from the primary database. Oracle Database 12c enables a physical standby database to receive and apply redo while it is open in read-only mode.
Logical standby database
A logical standby database contains the same logical information as the production database (unless configured to skip certain objects), although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary by transforming the data in the redo received from the primary into SQL statements and then executing those statements on the standby; this is done with LogMiner technology on the received redo. The tables in a logical standby database can be used simultaneously for recovery and for other tasks such as reporting, summations, and queries.
Snapshot standby database
A snapshot standby database is created by converting a physical standby database. It receives redo from the primary database but does not apply that redo until it is converted back into a physical standby database. The snapshot standby can be used for updates, but those updates are discarded when it is converted back into a physical standby. A snapshot standby database is appropriate when you require a temporary, updatable version of a physical standby database.
What is cloning?
Database cloning is a procedure used to create an identical copy of an existing Oracle database. DBAs occasionally need to clone databases to test backup and recovery strategies, or to export a table that was dropped from the production database and import it back into production. Cloning can be done on a different host or on the same host, and a clone is distinct from a standby database. Database cloning can be done using the following methods: cold cloning, hot cloning, and RMAN cloning.
The basic memory structures associated with Oracle Database include:
System Global Area (SGA): a group of shared memory structures, known as SGA components, that contain data and control information for one Oracle Database instance. All server and background processes share the SGA. Examples of data stored in the SGA include cached data blocks and shared SQL areas.
Program Global Area (PGA): a nonshared memory region that contains data and control information exclusively for use by one Oracle process; Oracle Database creates the PGA when the process starts. One PGA exists for each server process and background process, and the collection of individual PGAs is the total instance PGA. Database initialization parameters set the size of the instance PGA, not of individual PGAs.
User Global Area (UGA): the memory associated with a user session.
Software code areas: portions of memory used to store code that is being run or can be run. Oracle Database code is stored in a software area that is typically at a different, more protected location than user programs.
Oracle initialization parameters
Oracle Database logical storage structure
Oracle allocates logical database space for all data in a database. The units of database space allocation are data blocks, extents, and segments. The relationships among segments, extents, and data blocks in a data file, and between Oracle blocks and OS blocks:
Oracle block: at the finest level of granularity, Oracle stores data in data blocks (also called logical blocks, Oracle blocks, or pages). One data block corresponds to a specific number of bytes of physical database space on disk.
Oracle extent: the next level of logical database space. An extent is a specific number of contiguous data blocks allocated for storing a specific type of information; an extent is always contained within a single data file, and therefore within a single tablespace.
Oracle segment: the level of logical database storage greater than an extent. A segment is a set of extents, each of which has been allocated for a specific data structure, all stored in the same tablespace. For example, each table's data is stored in its own data segment, while each index's data is stored in its own index segment. If the table or index is partitioned, each partition is stored in its own segment.
Data block: Oracle manages the storage space in the data files of a database in units called data blocks; a data block is the smallest unit of data used by a database. The Oracle block is the logical unit that maps onto physical storage: for example, a table's data is stored logically in its data segment and physically in data file blocks.
The high water mark is the boundary between used and unused space in a segment.
Operating system block: the data blocks in the data files are physically stored in operating system blocks.
OS page: the smallest unit of storage that can be atomically written to non-volatile storage is called a page.
Details of data storage in Oracle blocks:
An extent is a set of logically contiguous data blocks allocated for storing a specific type of information. In the figure above, the 24 KB extent has 12 data blocks, while the 72 KB extent has 36 data blocks (2 KB per block). A segment is a set of extents allocated for a specific database object, such as a table: the data for the employees table is stored in its data segment, whereas each index for employees is stored in its index segment. Every database object that consumes storage consists of a single segment.
A bigfile tablespace eases database administration because it consists of only one data file. That single data file can be up to 128 TB (terabytes) in size if the tablespace block size is 32 KB; with the more common 8 KB block size, 32 TB is the maximum size of a bigfile tablespace.
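The quoted limits imply that a bigfile data file can address about 2^32 blocks (2^32 x 32 KB = 128 TB, and 2^32 x 8 KB = 32 TB); under that assumption, the maximum file size for common block sizes works out as follows.

MAX_BLOCKS_BIGFILE = 2 ** 32  # blocks addressable in one bigfile data file (assumed)

for block_kb in (2, 4, 8, 16, 32):
    max_bytes = MAX_BLOCKS_BIGFILE * block_kb * 1024
    print("%2d KB blocks -> %3d TB maximum data file" % (block_kb, max_bytes // 2**40))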
Database Systems Handbook BY: MUHAMMAD SHARIF 327 Broad View of Logical and Physical Structure of Database System in Oracle. Oracle Database must use logical space management to track and allocate the extents in a tablespace. When a database object requires an extent, the database must have a method of finding and providing it. Similarly, when an object no longer requires an extent, the database must have a method of making the free extent available. Oracle Database manages space within a tablespace based on the type that you create.
You can create either of the following types of tablespaces:
Locally managed tablespaces (the default): the database uses bitmaps in the tablespaces themselves to manage extents, so locally managed tablespaces have a part of the tablespace set aside for a bitmap. Within a tablespace, the database can manage segments with automatic segment space management (ASSM) or manual segment space management (MSSM).
Dictionary-managed tablespaces: the database uses the data dictionary to manage the extents.
Oracle physical storage structure
Oracle Database memory management
Memory management involves maintaining optimal sizes for the Oracle instance memory structures as demands on the database change. Oracle Database manages memory based on the settings of memory-related initialization parameters. The basic options for memory management are as follows:
Automatic memory management: you specify the target size for the database instance memory; the instance automatically tunes to the target memory size, redistributing memory as needed between the SGA and the instance PGA.
Automatic shared memory management: this management model is partially automated. You set a target size for the SGA and then have the option of setting an aggregate target size for the PGA or managing PGA work areas individually.
Manual memory management: instead of setting the total memory size, you set many initialization parameters to manage components of the SGA and instance PGA individually.
The SGA (System Global Area) is an area of memory (RAM) allocated when an Oracle instance starts up. The SGA's size and function are controlled by initialization (INIT.ORA or SPFILE) parameters. In general, the SGA consists of the following subcomponents, as can be verified by querying V$SGAINFO:
SELECT * FROM v$sgainfo;
The common components are:
- Data buffer cache: caches data and index blocks for faster access.
- Shared pool: caches parsed SQL and PL/SQL statements.
- Dictionary cache: information about data dictionary objects.
- Redo log buffer: committed transactions that are not yet written to the redo log files.
- Java pool: caches parsed Java programs.
- Streams pool: caches Oracle Streams objects.
- Large pool: used for backups, UGAs, etc.
Database Systems Handbook BY: MUHAMMAD SHARIF 330 Automatic Shared Memory Management simplifies the configuration of the SGA and is the recommended memory configuration. To use Automatic Shared Memory Management, set the SGA_TARGET initialization parameter to a nonzero value and set the STATISTICS_LEVEL initialization parameter to TYPICAL or ALL. The value of the SGA_TARGET parameter should be set to the amount of memory that you want to dedicate to the SGA. In response to the workload on the system, the automatic SGA management distributes the memory appropriately for the following memory pools: 1. Database buffer cache (default pool) 2. Shared pool 3. Large pool 4. Java pool 5. Streams pool
Oracle database files and ASM files comparison:
END
Database Systems Handbook BY: MUHAMMAD SHARIF 333 CHAPTER 16 DATABASE BACKUPS AND RECOVERY, LOGS MANAGEMENT Overview of Backup Solutions in Oracle Several circumstances can halt the operation of an Oracle database.
There are two ways to perform a data backup in Oracle: backups are divided into physical backups and logical backups.
Logical backups contain logical data (for example, tables and stored procedures) extracted with an Oracle export utility and stored in a binary file; you can use logical backups to supplement physical backups. Backup sets are logical entities produced by the RMAN BACKUP command.
Oracle Recovery Manager (RMAN): backups are done by server sessions (restore files, back up data files, recover data files), and this is the recommended approach. A user can log in to RMAN and command it to back up a database; RMAN can write backup sets to disk and tape, including cold backups (offline database backups).
User-managed backups: performed with SQL*Plus and operating system commands.
Exporting and importing data with command-line utilities (logical backup):
1. Data Pump Export and Data Pump Import
2. Original Export and Import
Physical backups
Physical backups, which are the primary concern in a backup and recovery strategy, are copies of physical database files. You can make physical backups with either the Oracle Recovery Manager (RMAN) utility or operating system utilities. For example, a physical backup might copy database content from a local disk drive to another secure location.
Physical backup types (cold, hot, full, incremental)
During an Oracle tablespace hot backup, you (or your script) put a tablespace into backup mode, copy the data files to disk or tape, and then take the tablespace out of backup mode.
Hot backup: also known as a dynamic or online backup; performed while the database is actively online and accessible to users.
Cold backup: users cannot modify the database during a cold backup, so the database and the backup copy are always synchronized. Cold backup is used only when the service level allows for the required system downtime.
Database Systems Handbook BY: MUHAMMAD SHARIF 335 Full—Creates a copy of data that can include parts of a database such as the control file, transaction files (redo logs), tablespaces, archive files, and data files. Regular cold full physical backups are recommended. The database must be in archive log mode for a full physical backup. Incremental—Captures only changes made after the last full physical backup. Incremental backup can be done with a hot backup. Cold-full backup - A cold-full backup is when the database is shut down, all of the physical files are backed up, and the database is started up again. Cold-partial backup - A cold-partial backup is used when a full backup is not possible due to some physical constraints. Hot-full backup - A hot-full backup is one in which the database is not taken off-line during the backup process. Rather, the tablespace and data files are put into a backup state. Hot-partial backup - A hot-partial backup is one in which the database is not taken off-line during the backup process, plus different tablespaces are backed up on different nights. Consistent and Inconsistent Backups A consistent backup is one in which the files being backed up contain all changes up to the same system change number (SCN). This means that the files in the backup contain all the data taken from the same point in time. Unlike an inconsistent backup, a consistent whole database backup does not require recovery after it is restored. An inconsistent backup is a backup of one or more database files that you make while the database is open or after the database has shut down abnormally. Image Backup/mirror backup A full image backup, or mirror backup, is a replica of everything on your computer's hard drive, from the operating system, boot information, apps, and hidden files to your preferences and settings. Imaging software not only captures individual files but everything you need to get your system running again. Image copies are exact byte- for-byte copies of files. RMAN prefers to use an image copy over a backup set.
Restoring a database backup:
If you use SQL*Plus, run the RECOVER command to perform recovery. If you use RMAN, run the RMAN RECOVER command.
Flashback in Oracle is a set of tools that allows system administrators and users to view, and even manipulate, past states of data without having to recover to a fixed point in time. Using the FLASHBACK TABLE command, we can pull a table out of the recycle bin; once the flashback is complete, the table is restored. At the physical level, Oracle Flashback Database provides a more efficient data protection alternative to database point-in-time recovery (DBPITR): if the current data files have unwanted changes, you can use the RMAN command FLASHBACK DATABASE to revert the data files to their contents at a past time.
Database exports/imports with Data Pump
Export the HR schema to a dump file named schema.dmp by issuing the following command at the system command prompt:
EXPDP SYSTEM/PASSWORD SCHEMAS=HR DIRECTORY=DMPDIR DUMPFILE=SCHEMA.DMP LOGFILE=EXPSCHEMA.LOG
Import it into another schema (Data Pump uses REMAP_SCHEMA rather than the original Import utility's FROMUSER/TOUSER parameters):
IMPDP USER/PASSWORD@DB_NAME DIRECTORY=DATA_PUMP_DIR DUMPFILE=DUMP_NAME.DMP REMAP_SCHEMA=MIS:EMR
Crash recovery and log-based recovery
The log is a sequence of records. The log of each transaction is maintained in stable storage so that if any failure occurs, the transaction can be recovered from there.
Log management and log record contents
Log: an ordered list of REDO/UNDO actions. A log record contains <XID, pageID, offset, length, old data, new data> plus additional control information. The fields are:
- XID: transaction ID - which transaction did this operation
- pageID: which page has been modified
- offset: where on the page the data started changing (typically in bytes)
- length: how much data was changed (typically in bytes)
- old data: what the data was originally (used for undo operations)
- new data: what the data has been updated to (used for redo operations)
The data item identifier (the pageID plus the offset) uniquely identifies the data item that was written.
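A minimal sketch of a log record with these fields, and of how the before- and after-images drive UNDO and REDO; the page contents are a made-up example.

from dataclasses import dataclass

@dataclass
class LogRecord:
    xid: str         # transaction ID
    page_id: int     # which page was modified
    offset: int      # where on the page the change starts (bytes)
    length: int      # how many bytes changed
    old_data: bytes  # before-image, used for UNDO
    new_data: bytes  # after-image, used for REDO

def undo(page, rec):
    page[rec.offset:rec.offset + rec.length] = rec.old_data

def redo(page, rec):
    page[rec.offset:rec.offset + rec.length] = rec.new_data

page = bytearray(b"balance=100")
rec = LogRecord("T1", page_id=7, offset=8, length=3, old_data=b"100", new_data=b"250")
redo(page, rec); print(page)  # bytearray(b'balance=250')
undo(page, rec); print(page)  # bytearray(b'balance=100')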
Checkpoint
A checkpoint is like a bookmark: checkpoints are marked during the execution of transactions, and the log files are created as the transaction steps execute. A checkpoint declares a point before which all log records are stored permanently on disk and the database is in a consistent state. In the case of a crash, work and time are saved because the system can restart recovery from the checkpoint. Checkpointing is a quick way to limit the number of log records to scan on recovery (a tiny illustration follows).
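A tiny illustration, with made-up log records, of how the checkpoint bounds the recovery scan:

# A log as (LSN, record) pairs; the CHECKPOINT entry marks a safe restart point.
log = [
    (1, "T1 UPDATE"), (2, "T1 COMMIT"),
    (3, "CHECKPOINT"),
    (4, "T2 UPDATE"), (5, "T3 UPDATE"), (6, "T2 COMMIT"),
]

last_ckpt = max(lsn for lsn, rec in log if rec == "CHECKPOINT")
to_scan = [entry for entry in log if entry[0] >= last_ckpt]
print("recovery scans %d of %d records, starting at LSN %d"
      % (len(to_scan), len(log), last_ckpt))

Everything before LSN 3 is already safely on disk, so recovery never has to look at it.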
Store the LSN of the most recent checkpoint in a master record on disk.
System catalog
A repository of information describing the data in the database (metadata: data about data).
Database Systems Handbook BY: MUHAMMAD SHARIF 343 Data Replication Replication is the process of copying and maintaining database objects in multiple databases that make up a distributed database system. Replication can improve the performance and protect the availability of applications because alternate data access options exist.
Oracle provides its own set of tools to replicate Oracle databases and integrate them with other databases. This section explores the tools provided by Oracle as well as open-source tools that can be used for Oracle database replication by implementing custom code. The catalog is needed to keep track of the location of each fragment and replica.
Data replication techniques: synchronous vs. asynchronous
Synchronous replication: all replicas are kept up to date. Data is copied from the client to the model server and then replicated to all the replica servers before the client is notified that the data has been replicated. This takes longer to verify than the asynchronous method, but it presents the advantage of knowing that all data was copied before proceeding.
Asynchronous replication: cheaper, but with a delay in synchronization. The data is sent to the model server (the server from which the replicas take data), which pings the client with a confirmation that the data has been received and then copies the data to the replicas at an unspecified or monitored pace. Asynchronous database replication offers flexibility and ease of use, as replication happens in the background.
Methods to set up Oracle database replication:
1. Using Hevo Data
2. Using a full backup and load approach
3. Using a trigger-based approach
4. Using Oracle GoldenGate CDC
5. Using a custom script based on the binary log
Oracle types of data replication and integration in OLAP, with three main architectures:
Consolidation: all data is moved into a single database and managed from a central location. Oracle Real Application Clusters (Oracle RAC), grid computing, and Virtual Private Database (VPD) can help you consolidate information into a single database that is highly available, scalable, and secure.
Federation: data appears to be integrated into a single virtual database while remaining in its current distributed locations. Distributed queries, distributed SQL, and Oracle Database Gateway can help you create a federated database.
Sharing (mediation): multiple copies of the same information are maintained in multiple databases and application data stores. Data replication and messaging can help you share information among multiple databases.
END
CHAPTER 17 PREREQUISITES OF STORAGE MANAGEMENT AND ORACLE INSTALLATION
Overview of hardware requirements
These are the hardware requirements you must meet before installing Oracle Management Service (OMS), a standalone Oracle Management Agent (Management Agent), and Oracle Management Repository (Management Repository):
- Physical memory (RAM): 256 MB minimum; 512 MB recommended (on Windows Vista the minimum is 512 MB)
- Virtual memory: double the amount of RAM
- Disk space: Basic installation type, 2.04 GB total; advanced installation types, 1.94 GB total
- Video adapter: 256 colors
- Processor: 550 MHz minimum (on Windows Vista the minimum is 800 MHz)
In particular, this chapter covers:
1. CPU, RAM, heap size, and hard disk space requirements for OMS
2. CPU, RAM, and hard disk space requirements for a standalone Management Agent
3. CPU, RAM, and hard disk space requirements for the Management Repository
CPU, RAM, heap size, and hard disk space requirements for an OMS host, by host size:
- CPU cores/host: small 2, medium 4, large 8
- RAM: small 4 GB, medium 6 GB, large 8 GB
- RAM with ADP and JVMD: small 6 GB, medium 10 GB, large 14 GB
- Oracle WebLogic Server JVM heap size: small 512 MB, medium 1 GB, large 2 GB
- Hard disk space: 7 GB for all sizes
- Hard disk space with ADP and JVMD: small 10 GB, medium 12 GB, large 14 GB
Note: while installing an additional OMS (by cloning an existing one), if you have installed BI Publisher on the source host, ensure that you have 7 GB of additional hard disk space on the destination host, for a total of 14 GB.
CPU, RAM, and hard disk space requirements for a standalone Management Agent: 2 CPU cores per host, 512 MB of RAM, and 1 GB of hard disk space.
RAM and hard disk space requirements for a Management Repository host, by host size:
- CPU cores/host: small 2, medium 4, large 8
- RAM: small 4 GB, medium 6 GB, large 8 GB
- Hard disk space: small 50 GB, medium 200 GB, large 400 GB
Oracle Database hardware component requirements for Windows x64
The following lists the hardware components required for Oracle Database on Windows x64.
Windows x64 minimum hardware requirements:
- System architecture, processor: AMD64 and Intel EM64T
- Physical memory (RAM): 2 GB minimum
- Virtual memory (swap): if physical memory is between 2 GB and 16 GB, set virtual memory to 1 times the size of the RAM; if physical memory is more than 16 GB, set virtual memory to 16 GB
- Disk space: typical install type, 10 GB total; advanced install types, 10 GB total
- Video adapter: 256 colors
- Screen resolution: 1024 x 768 minimum
Windows x64 minimum disk space requirements on NTFS:
- Enterprise Edition: TEMP space 595 MB; SYSTEM_DRIVE:\Program Files\Oracle\Inventory 4.55 MB; Oracle home 6.00 GB; data files* 4.38 GB**; total 10.38 GB**
- Standard Edition 2: TEMP space 595 MB; SYSTEM_DRIVE:\Program Files\Oracle\Inventory 4.55 MB; Oracle home 5.50 GB; data files* 4.24 GB**; total 9.74 GB**
* Refers to the contents of the admin, cfgtoollogs, flash_recovery_area, and oradata directories in the ORACLE_BASE directory.
Memory requirements for installing Oracle Fusion Middleware: on Linux, UNIX, and Windows alike, 4 GB minimum physical memory and 8 GB minimum available memory.
Database Systems Handbook BY: MUHAMMAD SHARIF 348
Calculations for No-Compression Databases
To calculate the database size when the compression option is none, use the formula:
number of blocks * (72 bytes + size of expanded data block)
Calculations for Compressed Databases
Because the compression method used can vary per block, the following calculation formulas are general estimates of the database size. Actual implementation could result in numbers larger or smaller than the calculations. The compression methods are:
1. Bitmap Compression
2. Index-Value Compression
3. RLE Compression
4. zlib Compression
Index Files
The minimum size for the index is 8,216,576 bytes (8 MB). To calculate the size of a database index, including all index files, perform the following calculation:
number of existing blocks * 112 bytes = the size of the database index
About Calculating Database Limits
Use the size guidelines in this section to calculate Oracle Database limits.
Block Size Guidelines

Type                                           Size
Maximum block size                             16,384 bytes (16 KB)
Minimum block size                             2 KB
Maximum blocks for each file                   4,194,304 blocks
Maximum possible file size with 16 KB blocks   64 GB (4,194,304 * 16,384 bytes)

Maximum Number of Files for Each Database

Block Size   Number of Files
2 KB         20,000
4 KB         40,000
8 KB         65,536
16 KB        65,536
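As a worked example of the index-size formula above (the block count is hypothetical): a database with 1,000,000 existing blocks needs about 1,000,000 * 112 bytes = 112,000,000 bytes, roughly 107 MB, for its index files. On the Oracle side, the block size that the limits above depend on can be checked directly; this is a minimal query and assumes a user with access to V$PARAMETER:
-- Shows the database block size in bytes (commonly 8192, i.e. 8 KB).
SELECT value AS block_size_bytes
FROM v$parameter
WHERE name = 'db_block_size';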
Database Systems Handbook BY: MUHAMMAD SHARIF 349
Maximum File Sizes

Type                        Size
Maximum file size (FAT)     4 GB
Maximum file size (NTFS)    16 exabytes (EB)
Maximum database size       65,536 * 64 GB, approximately 4 petabytes (PB)
Maximum control file size   20,000 blocks

Data Block Format
Every data block has a format or internal structure that enables the database to track the data and free space in the block. This format is similar whether the data block contains table, index, or table cluster data.
Oracle Database 12c installation steps
In this section, you will be installing the Oracle Database and creating an Oracle Home User account. Here, Oracle Universal Installer (OUI) is used to install the Oracle software.
1. Expand the database folder that you extracted in the previous section. Double-click setup.
2. Click Yes in the User Account Control window to continue with the installation.
Database Systems Handbook BY: MUHAMMAD SHARIF 350
3. The Configure Security Updates window appears. Enter your email address and My Oracle Support password to receive security issue notifications via email. If you do not wish to receive notifications via email, deselect the option. Select "Skip software updates" if you do not want to apply any updates. Accept the default and click Next.
4. The Select Installation Option window appears with the following options:
Select "Create and configure a database" to install the database, create a database instance, and configure the database.
Select "Install database software only" to only install the database software.
Select "Upgrade an existing database" to upgrade a database that is already installed.
In this OBE, we create and configure the database. Select the Create and configure a database option and click Next.
5. The System Class window appears. Select Desktop Class or Server Class depending on the type of system you are using. In this OBE, we will perform the installation on a desktop/laptop. Select Desktop class and click Next.
6. The Oracle Home User Selection window appears. Starting with Oracle Database 12c Release 1 (12.1), Oracle Database on Microsoft Windows supports the use of an Oracle Home User, specified at the time of installation. This Oracle Home User is used to run the Windows services for an Oracle Home, and is similar to the Oracle user on Oracle Database on Linux. This user is associated with an Oracle Home and cannot be changed to a different user post-installation.
Note: Different Oracle homes on a system can share the same Oracle Home User or use different Oracle Home Users.
The Oracle Home User is different from an Oracle Installation User. The Oracle Installation User is the user who requires administrative privileges to install Oracle products. The Oracle Home User is used to run the Windows services for the Oracle Home.
The window provides the following options:
1. If you select "Use Existing Windows User", the user credentials provided must be a standard Windows user account (not an administrator).
2. If this is a single-instance database installation, the user can be a local user, a domain user, or a managed services account.
3. If this is an Oracle RAC database installation, the existing user must be a Windows domain user. The Oracle installer will display an error if this user has administrator privileges.
4. If you select "Create New Windows User", the Oracle installer will create a new standard Windows user account. This user will be assigned as the Oracle Home User. Please note that this user will not have login privileges. This option is not available for an Oracle RAC database installation.
5. If you select "Use Windows Built-in Account", the system uses the Windows built-in account as the Oracle Home User.
Select the Create New Windows User option. Enter the user name as OracleHomeUser1 and password as Welcome1. Click Next.
Note: Remember the Windows user password. It will be required later to administer or manage database services.
7. The Typical Install Configuration window appears. Click on a text field and then the balloon icon to learn more about the field. Note that by default, the installer creates a container database along with a pluggable database called "pdborcl". The pluggable database contains the sample HR schema.
8. Change the Global database name to orcl. Enter the "Administrative password" as Oracle_1. This password will be used later to log into administrator accounts such as SYS and SYSTEM.
Click Next. 9. The prerequisite checks are performed and a Summary window appears. Review the settings and click Install.
Database Systems Handbook BY: MUHAMMAD SHARIF 351
Note: Depending on your firewall settings, you may need to grant permissions to allow Java to access the network.
10. The progress window appears.
11. The Database Configuration Assistant starts and creates your database.
12. After the Database Configuration Assistant creates the database, you can navigate to https://localhost:5500/em as a SYS user to manage the database using Enterprise Manager Database Express. You can click "Password Management..." to unlock accounts. Click OK to continue.
13. The Finish window appears. Click Close to exit the Oracle Universal Installer.
14. To verify the installation, navigate to C:\Windows\system32 using Windows Explorer. Double-click services.msc. The Services window appears, displaying a list of services.
Note: In the advanced installation path, you also specify the memory allocation for the instance at this stage.
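Once the services are running, a quick sanity check is to connect and confirm the instance is open; this is a minimal sketch that assumes the orcl service name and the SYSTEM password chosen during installation.
sqlplus system/Oracle_1@localhost:1521/orcl
-- Confirm the instance status (should report OPEN).
SELECT instance_name, status FROM v$instance;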
Database Systems Handbook BY: MUHAMMAD SHARIF 353
CHAPTER 18 ORACLE DATABASE APPLICATIONS DEVELOPMENT USING ORACLE APPLICATION EXPRESS
Overview of APEX: history, architecture, and manage utility
The database vendor Oracle is well known for its relational database system, Oracle Database, which provides many efficient features to read and write large amounts of data. To cope with the growing demand for developing web applications very fast, Oracle created the online development environment Oracle APEX. The creators of Oracle Application Express say it can help you develop enterprise apps up to 20 times faster and with 100 times less code.
There is no need to spend time on the GUI at the very beginning; the developer can directly start with implementing the business logic. This is why Oracle APEX is well suited to creating rapid GUI prototypes without logic, so that prospective customers can get an idea of how their future application will look.
Oracle APEX – an extremely powerful tool
As you can see, Oracle APEX is an extremely powerful tool that allows you to easily create simple-to-powerful apps, and gives you a lot of control over their functions and appearance. You have many different components available, like charts, different types of reports, mobile layouts, REST Web Services, faceted search, card regions, and many more. And the cool thing is, it’s going to get even better with time. Oracle’s roadmap for the technology is extensive and mentions things such as:
Database Systems Handbook BY: MUHAMMAD SHARIF 354
 Runtime application customization
 More analytics
 Machine learning
 Process modeling
 Support for MySQL
 Native map component (you’ll be able to natively create a map like those you saw in the Covid-19 apps I mentioned – right now you have to use additional tools for that, like JavaScript or a map plug-in)
 Oracle JET-based components (JavaScript Extension Toolkit – it’s definitely not low-code, but it’s got nice data visualizations)
 Expanded capabilities in APEX Service Cloud Console
 REST Service Catalog (I had to google around for the one I used, but in the future, you’ll have a catalog of freely available options to choose from)
 Integration with developer lifecycle services
 Improved printing and PDF export capabilities
As you can see, there are a lot of things worth waiting for. Oracle APEX is going to get a lot more powerful, and that’s even more of a reason to get to know it and start using it.
Distinguishing Characteristics and Apex Data Sources
Database Systems Handbook BY: MUHAMMAD SHARIF 355
Apex history
APEX is a very powerful development tool, which is used to create web-based, database-centric applications. The tool itself consists of a schema in the database with a lot of tables, views, and PL/SQL code. It’s available for every edition of the database. The techniques that are used with this tool are PL/SQL, HTML, CSS, and JavaScript.
Before APEX there was WebDB, which was based on the same techniques. WebDB became part of Oracle Portal and disappeared in silence. The difference between APEX and WebDB is that WebDB generates packages that generate the HTML pages, while APEX generates the HTML pages at runtime from the repository. Despite this approach, APEX is amazingly fast.
APEX became available to the public in 2004, when it was part of version 10g of the database. At that time it was called HTMLDB, and the first version was 1.5. Before HTMLDB, it was called Oracle Flows, Oracle Platform, and Project Marvel.
Database Systems Handbook BY: MUHAMMAD SHARIF 356
Note: Starting with Oracle Database 12c Release 2 (12.2), Oracle Application Express is included in the Oracle Home on disk and is no longer installed by default in the database.
Oracle Application Express is included with the following Oracle Database releases:
Oracle Database 19c – Oracle Application Express Release 18.1.
Oracle Database 18c – Oracle Application Express Release 5.1.
Oracle Database 12c Release 2 (12.2) – Oracle Application Express Release 5.0.
Oracle Database 12c Release 1 (12.1) – Oracle Application Express Release 4.2.
Oracle Database 11g Release 2 (11.2) – Oracle Application Express Release 3.2.
Oracle Database 11g Release 1 (11.1) – Oracle Application Express Release 3.0.
Oracle Database releases less frequently than Oracle Application Express. Therefore, Oracle recommends updating to the latest Oracle Application Express release available on Oracle Technology Network.
Within each application, you can also specify a Compatibility Mode in the Application Definition. The Compatibility Mode attribute controls the compatibility mode of the Application Express runtime engine. Compatibility Mode options include Pre 4.1, 4.1, 4.2, 5.0, 5.1/18.1, 18.2, 19.1, and 19.2 or later versions.
Most recent Oracle APEX releases
Version 22
This release of Oracle APEX introduces Approvals and the Unified Task List, Simplified Create Page wizards, Readable Application Export formats, and the Data Generator. APEX 22.1 also brings several enhancements to existing components, such as tokenized row search, an easy way to sort regions, improvements to faceted search, additional customization of the PWA service worker, a more streamlined developer experience, and much more!
Version 21
This release of Oracle APEX introduces Smart Filters, Progressive Web Apps, and REST Service Catalogs. APEX 21.2 also brings greater UI flexibility with Universal Theme, new and updated page components, numerous improvements to the developer experience, and a whole lot more!
Especially now that Oracle has positioned APEX as one of the important tools for building applications in its Oracle Database Cloud Service, this interest will only grow. APEX shared a lot of the characteristics of cloud computing, even before cloud computing became popular. These characteristics include:
 Elasticity
 Browser-based development and runtime
 RESTful web services (REST stands for Representational State Transfer)
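Because the installed APEX release often differs from the database release, it is useful to check the APEX version inside the database itself. This is a minimal query against the documented APEX_RELEASE view:
-- Shows the installed APEX version, for example 21.2.0.
SELECT version_no FROM apex_release;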
Database Systems Handbook BY: MUHAMMAD SHARIF 358
The Oracle APEX architecture is fairly simple: because the database is doing all the hard work, we only have to add a web server. We can choose one of the following web servers:
 Oracle HTTP Server (OHS)
 Embedded PL/SQL Gateway (EPG)
 APEX Listener
Oracle APEX has a strong history, starting with version 1.5, which came out in 2004 – it was known as HTML DB then (before that it also had other names, like Flows and Project Marvel).
Oracle APEX is a part of the Oracle RAD architecture and technology stack. What does that mean?
“R” stands for REST, or rather ORDS – Oracle REST Data Services. ORDS is responsible for asking the database for the page and rendering it back to the client;
“A” stands for APEX, Oracle Application Express, the topic of this chapter;
“D” stands for Database, which is the place an APEX application resides in.
Database Systems Handbook BY: MUHAMMAD SHARIF 359 Other methodologies that work well with Oracle Application Express include: Spiral - This approach is actually a series of short waterfall cycles. Each waterfall cycle yields new requirements and enables the development team to create a robust series of prototypes. Rapid application development (RAD) life cycle - This approach has a heavy emphasis on creating a prototype that closely resembles the final product. The prototype is an essential part of the requirements phase. One disadvantage of this model is that the emphasis on creating the prototype can cause scope creep; developers can lose sight of their initial goals in the attempt to create the perfect application.
Database Systems Handbook BY: MUHAMMAD SHARIF 360
These include OAuth client, APEX User, Database Schema User, and OS User. While it is important to ensure your ORDS web services are secured, you also need to consider what a client has access to once authenticated. As a quick
Database Systems Handbook BY: MUHAMMAD SHARIF 361
reminder, authentication confirms your identity and allows you into the system; authorization decides what you can do once you are in.
Oracle REST Data Services is a Java EE-based alternative for Oracle HTTP Server and mod_plsql. The Java EE implementation offers increased functionality, including command-line based configuration, enhanced security, file caching, and RESTful web services. Oracle REST Data Services also provides increased flexibility by supporting deployments using Oracle WebLogic Server, GlassFish Server, Apache Tomcat, and a standalone mode.
The Oracle Application Express architecture requires some form of web server to proxy requests between a web browser and the Oracle Application Express engine. Oracle REST Data Services satisfies this need, but its use goes beyond that of Oracle Application Express configurations. Oracle REST Data Services simplifies the deployment process because there is no Oracle home required, as connectivity is provided using an embedded JDBC driver.
ORDS
ORDS, a Java-based application, enables developers with SQL and database skills to develop REST APIs for Oracle Database. You can deploy ORDS on web and application servers, including WebLogic®, Tomcat®, and GlassFish®, as shown in the following image:
ORDS is a middle-tier Java application that allows you to access your Oracle Database resources via REST APIs. Use standard HTTP(S) calls (GET|POST|PUT|DELETE) via URIs that ORDS makes available (for example, /ords/database123/user3/module5/something/). ORDS will route your request to the appropriate database, call the appropriate query or PL/SQL anonymous block, and return the output and HTTP codes.
Database Systems Handbook BY: MUHAMMAD SHARIF 362 For most calls, that’s going to be the results of a SQL statement – paginated and formatted as JSON.
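To make this flow concrete, the following is a minimal sketch of REST-enabling a schema and publishing one GET endpoint with the documented ORDS PL/SQL package. The schema name HR, the module and URI names, and the employees table are assumptions for the example.
BEGIN
-- Expose the HR schema under the /ords/hr/ path.
ORDS.ENABLE_SCHEMA(
p_enabled => TRUE,
p_schema => 'HR',
p_url_mapping_type => 'BASE_PATH',
p_url_mapping_pattern => 'hr');
-- Define a module, a URI template, and a GET handler backed by a query.
ORDS.DEFINE_MODULE(p_module_name => 'emp.v1', p_base_path => '/emp/');
ORDS.DEFINE_TEMPLATE(p_module_name => 'emp.v1', p_pattern => 'list');
ORDS.DEFINE_HANDLER(
p_module_name => 'emp.v1',
p_pattern => 'list',
p_method => 'GET',
p_source_type => ORDS.source_type_collection_feed,
p_source => 'SELECT employee_id, last_name FROM employees');
COMMIT;
END;
/
A GET request to http://host:port/ords/hr/emp/list then returns the query results as paginated JSON, exactly as described above.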
Database Systems Handbook BY: MUHAMMAD SHARIF 364 Oracle Cloud You can run APEX in an Autonomous Database (ADB) – an elastic database that you can scale up. It’s self-driving, self-healing, and can repair and upgrade itself. It comes in two flavours:
Database Systems Handbook BY: MUHAMMAD SHARIF 365
1. Autonomous Transaction Processing (ATP) – basically transaction processing; it’s where APEX sees most use;
2. Autonomous Data Warehouse (ADW) – for more query-driven APEX applications. Reporting on data is also a common use of Oracle APEX.
You can also use the new Database Cloud Service (DCS) – an APEX-only solution. For a fee, you can have a commercial application running on a database cloud service.
On-premise or Private Cloud
You can also run Oracle APEX on-premise or in a private cloud – anywhere a database runs. It can be a physical, dedicated server, a virtualized machine, or a Docker image (you can run it on your laptop, fire it up on a train or a plane – it’s very popular among Oracle Application Express developers). You can also use it on Exadata – a super-powerful physical database server – including on cloud services.
Oracle Utility (locking pages, apps, workspaces)
Database Systems Handbook BY: MUHAMMAD SHARIF 366 Workspace utility Application Components Supporting objects
Database Systems Handbook BY: MUHAMMAD SHARIF 367 Shared components object Utility components Remote development
Database Systems Handbook BY: MUHAMMAD SHARIF 370
How to use APEX for free
Autonomous Always Free – you can choose the Autonomous Always Free option, running either on ATP or ADW. It’s free for commercial use, but it doesn’t benefit from the scalability of the autonomous databases.
Oracle Database Express Edition (XE) – you can also run the free edition of the database on-premises, but in this case there’s a limit on how much data you can store.
Fan-made and official containers – there are also various fan-made and official containers with APEX installed available on the Internet.
About Assigning Oracle Default Schemas to Workspaces
In order for an Instance administrator to assign most Oracle default schemas to workspaces, a DBA must explicitly grant the privilege. When Oracle Application Express installs, the Instance administrator does not have the ability to assign Oracle default schemas to workspaces. Default schemas such as SYS, SYSTEM, and RMAN are reserved by Oracle for various product features and for internal use. Access to a default schema can be a very powerful privilege. For example, a workspace with access to the default schema SYSTEM can run applications that parse as the SYSTEM user.
In order for an Instance administrator to have the ability to assign most Oracle default schemas to workspaces, the DBA must explicitly grant the privilege using SQL*Plus to run a procedure within the APEX_INSTANCE_ADMIN package.
Granting the Privilege to Assign Oracle Default Schemas
DBAs can grant an Instance administrator the ability to assign Oracle schemas to workspaces. A DBA grants this privilege by using SQL*Plus to run the APEX_INSTANCE_ADMIN.UNRESTRICT_SCHEMA procedure from within the Application Express engine schema. For example:
EXEC APEX_INSTANCE_ADMIN.UNRESTRICT_SCHEMA(p_schema => 'RMAN');
COMMIT;
Revoking the Privilege to Assign Oracle Default Schemas
DBAs can revoke the privilege to assign default schemas. A DBA revokes this privilege by using SQL*Plus to run the APEX_INSTANCE_ADMIN.RESTRICT_SCHEMA procedure from within the Application Express engine schema. For example:
EXEC APEX_180100.APEX_INSTANCE_ADMIN.RESTRICT_SCHEMA(p_schema => 'RMAN');
COMMIT;
Database Systems Handbook BY: MUHAMMAD SHARIF 371
This example would prevent the Instance administrator from assigning the RMAN schema to any workspace. It does not, however, prevent workspaces that have already had the RMAN schema assigned to them from using the RMAN schema.
Granting the Privilege to Assign Oracle Default Schemas
The DBA can grant an Oracle Application Express administrator the ability to assign Oracle default schemas to workspaces by using SQL*Plus to run the APEX_SITE_ADMIN_PRIVS.UNRESTRICT_SCHEMA procedure from within the Application Express engine schema. For example:
EXEC FLOWS_030100.APEX_SITE_ADMIN_PRIVS.UNRESTRICT_SCHEMA(p_schema => 'SYSTEM');
COMMIT;
This example would enable the Oracle Application Express administrator to assign the SYSTEM schema to any workspace.
Revoking the Privilege to Assign Oracle Default Schemas
The DBA can revoke this privilege using SQL*Plus to run the APEX_SITE_ADMIN_PRIVS.RESTRICT_SCHEMA procedure from within the Application Express engine schema. For example:
EXEC FLOWS_030100.APEX_SITE_ADMIN_PRIVS.RESTRICT_SCHEMA(p_schema => 'SYSTEM');
COMMIT;
The following query displays the schema and workspace restrictions:
SELECT a.schema "SCHEMA", b.workspace_name "WORKSPACE"
FROM WWV_FLOW_RESTRICTED_SCHEMAS a, WWV_FLOW_RSCHEMA_EXCEPTIONS b
WHERE b.schema_id (+) = a.id;
Database Systems Handbook BY: MUHAMMAD SHARIF 372
Oracle Application/workspace schema assignments
The following table lists the Oracle APEX APEX_APPLICATION dictionary views, the functionality of each, and its parent view (format: VIEW: functionality (parent: PARENT_VIEW)).
APEX_APPLICATIONS: Applications defined in the current workspace or database user. (parent: APEX_WORKSPACES)
APEX_APPLICATION_ALL_AUTH: All authorization schemes for all components, by Application. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_AUTH: Identifies the available Authentication Schemes defined for an Application. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_AUTHORIZATION: Identifies Authorization Schemes which can be applied at the application, page, or component level. (parent: APEX_APPLICATIONS)
Database Systems Handbook BY: MUHAMMAD SHARIF 373
APEX_APPLICATION_BC_ENTRIES: Identifies Breadcrumb Entries which map to a Page and identify a page's parent. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_BREADCRUMBS: Identifies the definition of a collection of Breadcrumb Entries which are used to identify a page hierarchy. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_BUILD_OPTIONS: Identifies Build Options available to an application. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_CACHING: Applications defined in the current workspace or database user. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_COMPUTATIONS: Identifies Application Computations which can run for every page or on login. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_GROUPS: Application Groups defined per workspace. Applications can be associated with an application group. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_ITEMS: Identifies Application Items used to maintain session state that are not associated with a page. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_LISTS: Identifies a named collection of Application List Entries which can be included on any page using a region of type List. Display attributes are controlled using a List Template. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_LIST_ENTRIES: Identifies the List Entries which define a List. List Entries can be hierarchical or flat. (parent: APEX_APPLICATION_LISTS)
APEX_APPLICATION_LOCKED_PAGES: Locked pages of an application. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_LOVS: Identifies a shared list of values that can be referenced by a Page Item or Report Column. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_LOV_COLS: Identifies column metadata for a shared list of values. (parent: APEX_APPLICATION_LOVS)
Database Systems Handbook BY: MUHAMMAD SHARIF 374
APEX_APPLICATION_LOV_ENTRIES: Identifies the List of Values Entries which comprise a shared List of Values. (parent: APEX_APPLICATION_LOVS)
APEX_APPLICATION_NAV_BAR: Identifies navigation bar entries displayed on pages that use a Page Template that includes a #NAVIGATION_BAR# substitution string. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_PAGES: A Page definition is the basic building block of a page. Page components including regions, items, buttons, computations, branches, validations, and processes further define the definition of a page. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_PAGE_BRANCHES: Identifies branch processing associated with a page. A branch is a directive to navigate to a page or URL which is run at the conclusion of page accept processing. (parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_BUTTONS: Identifies buttons associated with a Page and Region. (parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_CHARTS: Identifies a chart associated with a Page and Region. (parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_CHART_A: Identifies a chart axis associated with a chart on a Page and Region. (parent: APEX_APPLICATION_PAGE_CHARTS)
APEX_APPLICATION_PAGE_CHART_S: Identifies a chart series associated with a chart on a Page and Region. (parent: APEX_APPLICATION_PAGE_CHARTS)
APEX_APPLICATION_PAGE_COMP: Identifies the computation of Item Session State. (parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_DA: Identifies Dynamic Actions associated with a Page. (parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_DA_ACTS: Identifies the Actions of a Dynamic Action associated with a Page. (parent: APEX_APPLICATION_PAGE_DA)
Database Systems Handbook BY: MUHAMMAD SHARIF 375
APEX_APPLICATION_PAGE_DB_ITEMS: Identifies Page Items which are associated with Database Table Columns. This view represents a subset of the items in the APEX_APPLICATION_PAGE_ITEMS view. (parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_GROUPS: Identifies page groups. (parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_IR: Identifies attributes of an interactive report. (parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_IR_CAT: Report column category definitions for interactive report columns. (parent: APEX_APPLICATION_PAGE_IR)
APEX_APPLICATION_PAGE_IR_CGRPS: Column group definitions for interactive report columns. (parent: APEX_APPLICATION_PAGE_IR)
APEX_APPLICATION_PAGE_IR_COL: Report column definitions for interactive report columns. (parent: APEX_APPLICATION_PAGE_IR)
APEX_APPLICATION_PAGE_IR_COMP: Identifies computations defined in user-level report settings for an interactive report. (parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_IR_COND: Identifies filters and highlights defined in user-level report settings for an interactive report. (parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_IR_GRPBY: Identifies the group by view defined in user-level report settings for an interactive report. (parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_IR_PIVOT: Identifies the pivot view defined in user-level report settings for an interactive report. (parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_IR_PVAGG: Identifies aggregates defined for a pivot view in user-level report settings for an interactive report. (parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_IR_PVSRT: Identifies sorts defined for a pivot view in user-level report settings for an interactive report. (parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_IR_RPT: Identifies user-level report settings for an interactive report. (parent: APEX_APPLICATION_PAGE_IR)
Database Systems Handbook BY: MUHAMMAD SHARIF 376
APEX_APPLICATION_PAGE_IR_SUB: Identifies subscriptions scheduled in saved reports for an interactive report. (parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_ITEMS: Identifies Page Items which are used to render HTML form content. Items automatically maintain session state which can be accessed using bind variables or substitution strings. (parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_MAP: Identifies the full breadcrumb path for each page with a breadcrumb entry. (parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_PROC: Identifies SQL or PL/SQL processing associated with a page. (parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_REGIONS: Identifies a content container associated with a Page and displayed within a position defined by the Page Template. (parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_REG_COLS: Region column definitions used for regions. (parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_RPT: Printing attributes for regions that are reports. (parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_RPT_COLS: Report column definitions used for Classic Reports, Tabular Forms, and Interactive Reports. (parent: APEX_APPLICATION_PAGE_RPT)
APEX_APPLICATION_PAGE_TREES: Identifies a tree control which can be referenced and displayed by creating a region with a source of this tree. (parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_VAL: Identifies Validations associated with an Application Page. (parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PARENT_TABS: Identifies a collection of tabs called a Tab Set. Each tab is part of a tab set and can be current for one or more pages. Each tab can also have a corresponding Parent Tab if two levels of Tabs are defined. (parent: APEX_APPLICATIONS)
Database Systems Handbook BY: MUHAMMAD SHARIF 377
APEX_APPLICATION_PROCESSES: Identifies Application Processes which can run for every page, on login, or upon demand. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_RPT_LAYOUTS: Identifies report layouts which can be referenced by report queries and classic reports. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_RPT_QRY_STMTS: Identifies individual SQL statements, which are part of a report query. (parent: APEX_APPLICATION_RPT_QUERIES)
APEX_APPLICATION_RPT_QUERIES: Identifies report queries, which are printable documents that can be integrated with an application using buttons, list items, and branches. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_SETTINGS: Identifies application settings typically used by applications to manage configuration parameter names and values. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_SHORTCUTS: Identifies Application Shortcuts which can be referenced using "MY_SHORTCUT" syntax. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_STATIC_FILES: Stores the files, such as CSS, images, and JavaScript files, of an application. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_SUBSTITUTIONS: Application-level definitions of substitution strings. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_SUPP_OBJECTS: Identifies the Supporting Object installation messages. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_SUPP_OBJ_BOPT: Identifies the Application Build Options that will be exposed to the Supporting Object installation. (parent: APEX_APPLICATION_SUPP_OBJECTS)
APEX_APPLICATION_SUPP_OBJ_CHCK: Identifies the Supporting Object pre-installation checks to ensure the database is compatible with the objects to be installed. (parent: APEX_APPLICATION_SUPP_OBJECTS)
APEX_APPLICATION_SUPP_OBJ_SCR: Identifies the Supporting Object installation SQL Scripts. (parent: APEX_APPLICATION_SUPP_OBJECTS)
Database Systems Handbook BY: MUHAMMAD SHARIF 378
APEX_APPLICATION_TABS: Identifies a set of tabs collected into tab sets which are associated with a Standard Tab Entry. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_TEMPLATES: Identifies reference counts for templates of all types. (parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_BC: Identifies the HTML template markup used to render a Breadcrumb. (parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_BUTTON: Identifies the HTML template markup used to display a Button. (parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_CALENDAR: Identifies the HTML template markup used to display a Calendar. (parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_LABEL: Identifies a Page Item Label HTML template's display attributes. (parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_LIST: Identifies HTML template markup used to render a List with List Elements. (parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_PAGE: The Page Template, which identifies the HTML used to organize and render page content. (parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_POPUPLOV: Identifies the HTML template markup and some functionality of all Popup List of Values controls for this application. (parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_REGION: Identifies a region's HTML template display attributes. (parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_REPORT: Identifies the HTML template markup used to render Report Headings and Rows. (parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_THEMES: Identifies a named collection of Templates. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_THEME_FILES: Stores the files, such as CSS, images, and JavaScript files, of a theme. (parent: APEX_APPLICATION_THEMES)
Database Systems Handbook BY: MUHAMMAD SHARIF 379
APEX_APPLICATION_THEME_STYLES: The Theme Style identifies the CSS file URLs which should be used for a theme. (parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TRANSLATIONS: Identifies message primary-language text and translated text. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_TRANS_DYNAMIC: Application dynamic translations. These are created in the Translation section of Shared Components and referenced at runtime via the function APEX_LANG.LANG. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_TRANS_MAP: Application Groups defined per workspace. Applications can be associated with an application group. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_TRANS_REPOS: Repository of translation strings. These are populated from the translation seeding process. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_TREES: Identifies a tree control which can be referenced and displayed by creating a region with a source of this tree. (parent: APEX_APPLICATIONS)
APEX_APPLICATION_WEB_SERVICES: Web Services referenceable from this Application. (parent: APEX_APPLICATIONS)
APEX_APPL_ACL_ROLES: Identifies Application Roles, which are workspace groups that are tied to a specific application. (parent: APEX_APPLICATIONS)
APEX_APPL_ACL_USERS: Identifies Application Users used to map application users to application roles. (parent: APEX_APPLICATIONS)
APEX_APPL_ACL_USER_ROLES: Identifies Application Users used to map application users to application roles. (parent: APEX_APPL_ACL_USERS)
APEX_APPL_AUTOMATIONS: Stores the metadata for automations of an application. (parent: APEX_APPLICATIONS)
APEX_APPL_AUTOMATION_ACTIONS: Identifies actions associated with an automation. (parent: APEX_APPLICATIONS)
Database Systems Handbook BY: MUHAMMAD SHARIF 380
APEX_APPL_CONCATENATED_FILES: Concatenated files of a user interface. (parent: APEX_APPLICATIONS)
APEX_APPL_DATA_LOADS: Identifies Application Data Load definitions. (parent: APEX_APPLICATIONS)
APEX_APPL_DATA_PROFILES: Available Data Profiles used for parsing CSV, XLSX, JSON, XML, and other data. (parent: APEX_APPLICATIONS)
APEX_APPL_DATA_PROFILE_COLS: Data Profile columns used for parsing JSON, XML, and other data. (parent: APEX_APPLICATIONS)
APEX_APPL_DEVELOPER_COMMENTS: Developer comments of an application. (parent: APEX_APPLICATIONS)
APEX_APPL_EMAIL_TEMPLATES: Stores the metadata for the email templates of an application. (parent: APEX_APPLICATIONS)
APEX_APPL_LOAD_TABLES: Identifies Application Legacy Data Loading definitions. (parent: APEX_APPLICATIONS)
APEX_APPL_LOAD_TABLE_LOOKUPS: Identifies the collection of key lookups of the data loading tables. (parent: APEX_APPLICATIONS)
APEX_APPL_LOAD_TABLE_RULES: Identifies a collection of transformation rules that are to be used on the load tables. (parent: APEX_APPLICATIONS)
APEX_APPL_PAGE_CALENDARS: Identifies Application Calendars. (parent: APEX_APPLICATION_PAGES)
APEX_APPL_PAGE_CARDS: Cards definitions. (parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPL_PAGE_CARD_ACTIONS: Card actions definitions. (parent: APEX_APPL_PAGE_CARDS)
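These dictionary views can be queried like any other view. For example, to list the pages of one application (a minimal sketch; the application ID 100 is a placeholder):
SELECT application_id, page_id, page_name
FROM apex_application_pages
WHERE application_id = 100
ORDER BY page_id;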
Database Systems Handbook BY: MUHAMMAD SHARIF 381
Some prerequisites to install Oracle APEX and ORDS are:
Setting up Oracle REST Data Services requires two steps:
1. Configuration, which creates the configuration files needed to run ORDS.
2. Installation, which allows ORDS to run and be called from a front-end web server: standalone (Jetty), WebLogic Server, Tomcat, or GlassFish.
This section presents how to install and configure APEX 21.2 with standalone ORDS 21.2.
In previous versions, an upgrade was required when a release affected the first two numbers of the version (4.2 to 5.0, or 5.1 to 18.1), but if the first two numbers of the version were not affected (5.1.3 to 5.1.4) you had to download and apply a patch rather than do the full installation. This is no longer the case.
Steps
Setup (download both pieces of software at the same version and unzip them to the same directory location)
Installation
Embedded PL/SQL Gateway (EPG) Configuration
Oracle REST Data Services (ORDS) Configuration
Oracle HTTP Server (OHS) Configuration
Database Systems Handbook BY: MUHAMMAD SHARIF 382
Network ACLs
Step One
Create a new tablespace to act as the default tablespace for APEX.
-- For Oracle Managed Files (OMF).
CREATE TABLESPACE apex DATAFILE SIZE 100M AUTOEXTEND ON NEXT 1M;
-- For non-OMF.
CREATE TABLESPACE apex DATAFILE '/path/to/datafiles/apex01.dbf' SIZE 100M AUTOEXTEND ON NEXT 1M;
-- A locally managed tablespace with system-allocated extents.
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
-- The same tablespace with uniform 128 KB extents instead.
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
In a CREATE TABLESPACE or ALTER DATABASE statement, a datafile clause such as SIZE 1M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE 1M sets the initial space for each tablespace to around 1032 KB.
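After creating the tablespace, you can confirm it exists and see how much free space it has; this is a minimal check against the standard dictionary views, using the APEX tablespace name from the example above.
-- Free space per tablespace, in megabytes.
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS mb_free
FROM dba_free_space
WHERE tablespace_name = 'APEX'
GROUP BY tablespace_name;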
Database Systems Handbook BY: MUHAMMAD SHARIF 384 Managing Space in Tablespaces Tablespaces allocate space in extents. Tablespaces can use two different methods to keep track of their free and used space:  Locally managed tablespaces: Extent management by the tablespace  Dictionary managed tablespaces: Extent management by the data dictionary When you create a tablespace, you choose one of these methods of space management. Later, you can change the management method with the DBMS_SPACE_ADMIN PL/SQL package. Step two Installation
Database Systems Handbook BY: MUHAMMAD SHARIF 385
Change directory to the directory holding the unzipped APEX software.
$ cd /home/oracle/apex
In this directory there are three important files:
apexins.sql – installs APEX in the database
apxchpwd.sql – changes the password for the main APEX user, ADMIN
apex_rest_config.sql – configures ORDS in the database
Step three
Either: Connect to SQL*Plus as the SYS user and run the "apexins.sql" script, specifying the relevant tablespace names and image URL.
SQL> CONN sys@pdb1 AS SYSDBA
SQL> -- @apexins.sql tablespace_apex tablespace_files tablespace_temp images
SQL> @apexins.sql APEX APEX TEMP /i/
Or: Log on to the database as SYSDBA, switch to the pluggable database orclpdb1, and run the installation script. You can install APEX in dedicated tablespaces if required.
sqlplus / as sysdba
alter session set container=orclpdb1;
@apexins.sql SYSAUX SYSAUX TEMP /i/
(Description of the command: @apexins.sql tablespace_apex tablespace_files tablespace_temp images
tablespace_apex - name of the tablespace for the APEX user.
tablespace_files - name of the tablespace for the APEX files user.
tablespace_temp - name of the temporary tablespace.
images - virtual directory for APEX images. Define the virtual image directory as /i/ for future updates.)
Step four
If you want to add the user silently, you could run the following code, specifying the required password and email.
BEGIN
APEX_UTIL.set_security_group_id( 10 );
APEX_UTIL.create_user(
Database Systems Handbook BY: MUHAMMAD SHARIF 386
p_user_name => 'ADMIN',
p_email_address => 'me@example.com',
p_web_password => 'PutPasswordHere',
p_developer_privs => 'ADMIN' );
APEX_UTIL.set_security_group_id( null );
COMMIT;
END;
/
Note: Oracle Application Express is installed in the APEX_210200 schema.
The structure of the link to the Application Express administration services is as follows: http://host:port/ords/apex_admin
The structure of the link to the Application Express development interface is as follows: http://host:port/ords
When Oracle Application Express installs, it creates three new database accounts, all with status LOCKED in the database:
APEX_210200 – the account that owns the Oracle Application Express schema and metadata.
FLOWS_FILES – the account that owns the Oracle Application Express uploaded files.
APEX_PUBLIC_USER – the minimally privileged account used for Oracle Application Express configuration with ORDS.
Create and change the password for the ADMIN account. When prompted, enter a password for the ADMIN account.
sqlplus / as sysdba
alter session set container=orclpdb1;
@apxchpwd.sql
output
SQL> @apxchpwd.sql
This script can be used to change the password of an Application Express
Database Systems Handbook BY: MUHAMMAD SHARIF 387
instance administrator. If the user does not yet exist, a user record will be created.
Enter the administrator's username [ADMIN]
User "ADMIN" does not yet exist and will be created.
Enter ADMIN's email [ADMIN]
Enter ADMIN's password []
Created instance administrator ADMIN.
Step Five
Create the APEX_LISTENER and APEX_REST_PUBLIC_USER users by running the "apex_rest_config.sql" script.
SQL> CONN sys@pdb1 AS SYSDBA
SQL> @apex_rest_config.sql
Configure RESTful Services. When prompted, enter a password for the APEX_LISTENER and APEX_REST_PUBLIC_USER accounts.
sqlplus / as sysdba
alter session set container=orclpdb1;
@apex_rest_config.sql
output
SQL> @apex_rest_config.sql
Enter a password for the APEX_LISTENER user []
Enter a password for the APEX_REST_PUBLIC_USER user []
...set_appun.sql
...setting session environment
...create APEX_LISTENER and APEX_REST_PUBLIC_USER users
...grants for APEX_LISTENER and ORDS_METADATA user
As a last step, you can change the passwords for the three users again:
ALTER USER apex_public_user IDENTIFIED BY Dbaora$ ACCOUNT UNLOCK;
ALTER USER apex_listener IDENTIFIED BY Dbaora$ ACCOUNT UNLOCK;
ALTER USER apex_rest_public_user IDENTIFIED BY Dbaora$ ACCOUNT UNLOCK;
Install and configure
You can install and configure APEX and ORDS by using the following methods:
Database Systems Handbook BY: MUHAMMAD SHARIF 388
 Install APEX and ORDS and configure ORDS.
 Install APEX and configure a web listener: the embedded PL/SQL gateway.
 Install APEX and configure the legacy web listener: Oracle HTTP Server.
For this section, I chose the first option, which Oracle recommends: install APEX and ORDS and configure ORDS.
Step Six
Now you need to decide which gateway to use to access APEX. The Oracle recommendation is ORDS.
Note: Oracle REST Data Services (ORDS), formerly known as the APEX Listener, allows APEX applications to be deployed without the use of Oracle HTTP Server (OHS) and mod_plsql or the Embedded PL/SQL Gateway. ORDS version 3.0 onward also includes JSON API support to work in conjunction with the JSON support in the database. ORDS can be deployed on WebLogic or Tomcat, or run in standalone mode.
For Lone-PDB installations (a CDB with one PDB), or for CDBs with small numbers of PDBs, ORDS can be installed directly into the PDB. If you are using many PDBs per CDB, you may prefer to install ORDS into the CDB to allow all PDBs to share the same connection pool.
Create the directory /home/oracle/ords for the ORDS software and unzip it:
mkdir /home/oracle/ords
cp ords-21.4.2.062.1806.zip /home/oracle/ords
cd /home/oracle/ords
unzip ords-21.4.2.062.1806.zip
Create the configuration directory /home/oracle/ords/conf for ORDS standalone:
mkdir /home/oracle/ords/conf
The first time you run ORDS, you are asked for:
directory to save configuration: /home/oracle/ords/conf
password for ORDS_PUBLIC_USER (to be created): Dbaora$
administrator user: SYS
password for SYS AS SYSDBA: !!! you must know it from your DBA !!!
use PL/SQL Gateway or not: 1 for yes
password for APEX_PUBLIC_USER: Dbaora$
password for APEX_LISTENER: Dbaora$
feature to enable: 1 for SQL Developer Web (enables all features)
wish to start in standalone mode: 1 for standalone mode
Database Systems Handbook BY: MUHAMMAD SHARIF 389 [oracle@oel8 ords]$ java -jar ords.war This Oracle REST Data Services instance has not yet been configured. Please complete the following prompts Enter the location to store configuration data: /home/oracle/ords/conf Enter the database password for ORDS_PUBLIC_USER: Confirm password: Requires to login with administrator privileges to verify Oracle REST Data Services schema. Enter the administrator username:sys Enter the database password for SYS AS SYSDBA: Confirm password: Connecting to database user: SYS AS SYSDBA url: jdbc:oracle:thin:@//oel8.dbaora.com:1521/orclpdb1 Retrieving information. Enter 1 if you want to use PL/SQL Gateway or 2 to skip this step. If using Oracle Application Express or migrating from mod_plsql then you must enter 1 [1]: Enter the database password for APEX_PUBLIC_USER: Confirm password: Enter the database password for APEX_LISTENER: Confirm password: Enter the database password for APEX_REST_PUBLIC_USER: Confirm password: Enter a number to select a feature to enable: [1] SQL Developer Web (Enables all features) [2] REST Enabled SQL [3] Database API [4] REST Enabled SQL and Database API [5] None Choose [1]:1 2022-03-19T18:40:34.543Z INFO reloaded pools: [] Installing Oracle REST Data Services version 21.4.2.r0621806 ... Log file written to /home/oracle/ords_install_core_2022-03-19_194034_00664.log ... Verified database prerequisites
Database Systems Handbook BY: MUHAMMAD SHARIF 390
... Created Oracle REST Data Services proxy user
... Created Oracle REST Data Services schema
... Granted privileges to Oracle REST Data Services
... Created Oracle REST Data Services database objects
... Log file written to /home/oracle/ords_install_datamodel_2022-03-19_194044_00387.log
... Log file written to /home/oracle/ords_install_scheduler_2022-03-19_194045_00075.log
... Log file written to /home/oracle/ords_install_apex_2022-03-19_194046_00484.log
Completed installation for Oracle REST Data Services version 21.4.2.r0621806. Elapsed time: 00:00:12.611
Enter 1 if you wish to start in standalone mode or 2 to exit [1]:1
Enter 1 if using HTTP or 2 if using HTTPS [1]:
Choose [1]:1
As a result, ORDS will be running in standalone mode and configured, so you can try to log on to APEX. After a machine reboot, start ORDS in standalone mode in the background as follows:
cd /home/oracle/ords
java -jar ords.war standalone &
Verify APEX is working. The administration page is at http://hostname:port/ords; in this case, http://oel8.dbaora.com:8080/ords
OR
Embedded PL/SQL Gateway (EPG) Configuration
If you want to use the Embedded PL/SQL Gateway (EPG) to front APEX, you can follow the instructions here. This is used for both the first installation and upgrades.
Run the "apex_epg_config.sql" script, passing in the base directory of the installation software as a parameter.
SQL> CONN sys@pdb1 AS SYSDBA
SQL> @apex_epg_config.sql /home/oracle
OR
Oracle HTTP Server (OHS) Configuration
If you want to use Oracle HTTP Server (OHS) to front APEX, you can follow the instructions here.
Database Systems Handbook BY: MUHAMMAD SHARIF 391
Change the password and unlock the APEX_PUBLIC_USER account. This will be used for any Database Access Descriptors (DADs).
SQL> ALTER USER APEX_PUBLIC_USER IDENTIFIED BY myPassword ACCOUNT UNLOCK;
Step Seven
Unlock the ANONYMOUS account.
SQL> CONN sys@cdb1 AS SYSDBA
DECLARE
l_passwd VARCHAR2(40);
BEGIN
l_passwd := DBMS_RANDOM.string('a',10) || DBMS_RANDOM.string('x',10) || '1#';
-- Remove CONTAINER=ALL for non-CDB environments.
EXECUTE IMMEDIATE 'ALTER USER anonymous IDENTIFIED BY ' || l_passwd || ' ACCOUNT UNLOCK CONTAINER=ALL';
END;
/
Check the port setting for the XML DB Protocol Server.
SQL> CONN sys@pdb1 AS SYSDBA
SQL> SELECT DBMS_XDB.gethttpport FROM DUAL;
GETHTTPPORT
-----------
0
1 row selected.
SQL>
If it is set to "0", you will need to set it to a non-zero value to enable it.
SQL> CONN sys@pdb1 AS SYSDBA
SQL> EXEC DBMS_XDB.sethttpport(8080);
APEX is now available at the URL http://hostname:8080/apex/
Recovery or uninstallation of ORDS
Starting/Stopping ORDS Under Tomcat
Database Systems Handbook BY: MUHAMMAD SHARIF 392
ORDS is started or stopped by starting or stopping the Tomcat instance it is deployed to. Assuming you have the CATALINA_HOME environment variable set correctly, the following commands should be used.
Oracle now supports Oracle REST Data Services (ORDS) running in standalone mode using the built-in Jetty web server, so you no longer need to worry about installing WebLogic, GlassFish, or Tomcat unless you have a compelling reason to do so. Removing this extra layer means one less layer to learn and one less layer to patch.
ORDS can run as a standalone app with a built-in web server. This is perfect for local development purposes, but in the real world you will want a decent Java application server (Tomcat, GlassFish, or WebLogic) with a web server in front of it (Apache or Nginx).
export CATALINA_OPTS="$CATALINA_OPTS -Duser.timezone=UTC"
$ $CATALINA_HOME/bin/startup.sh
$ $CATALINA_HOME/bin/shutdown.sh
ORDS Validate
You can validate/fix the current ORDS installation using the validate option.
$ $JAVA_HOME/bin/java -jar ords.war validate
Enter the name of the database server [ol7-122.localdomain]:
Enter the database listen port [1521]:
Enter the database service name [pdb1]:
Requires SYS AS SYSDBA to verify Oracle REST Data Services schema.
Enter the database password for SYS AS SYSDBA:
Confirm password:
Retrieving information.
Oracle REST Data Services will be validated.
Validating Oracle REST Data Services schema version 18.2.0.r1831332 ...
Log file written to /u01/asi_test/ords/logs/ords_validate_core_2018-08-07_160549_00215.log
Completed validating Oracle REST Data Services version 18.2.0.r1831332. Elapsed time: 00:00:06.898
$
Manual ORDS Uninstall
In recent versions you can use the following command to uninstall ORDS, providing the information when prompted.
# su - tomcat
$ cd /u01/ords
Database Systems Handbook BY: MUHAMMAD SHARIF 393
$ $JAVA_HOME/bin/java -jar ords.war uninstall
Enter the name of the database server [ol7-122.localdomain]:
Enter the database listen port [1521]:
Enter 1 to specify the database service name, or 2 to specify the database SID [1]:
Enter the database service name [pdb1]:
Requires SYS AS SYSDBA to verify Oracle REST Data Services schema.
Enter the database password for SYS AS SYSDBA:
Confirm password:
Retrieving information
Uninstalling Oracle REST Data Services ...
Log file written to /u01/ords/logs/ords_uninstall_core_2018-06-14_155123_00142.log
Completed uninstall for Oracle REST Data Services. Elapsed time: 00:00:10.876
$
In older versions of ORDS you had to extract scripts to perform the uninstall in the following way.
su - tomcat
cd /u01/ords
$JAVA_HOME/bin/java -jar ords.war ords-scripts --scriptdir /tmp
Perform the uninstall from the "oracle" user using the following commands.
su - oracle
cd /tmp/scripts/uninstall/core/
sqlplus sys@pdb1 as sysdba
@ords_manual_uninstall /tmp/scripts/logs
What is an APEX Workspace?
An APEX Workspace is a logical domain where you define APEX applications. Each workspace is associated with one or more database schemas (database users) which are used to store the database objects, such as tables, views, packages, and more. APEX applications are built on top of these database objects.
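Workspaces can be created from the Instance Administration pages, or programmatically with the documented APEX_INSTANCE_ADMIN API. The following is a minimal sketch run as a privileged user; the workspace and schema names are placeholders.
BEGIN
-- Create a workspace mapped to an existing database schema.
APEX_INSTANCE_ADMIN.ADD_WORKSPACE(
p_workspace => 'DEMO_WS',
p_primary_schema => 'DEMO');
COMMIT;
END;
/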
Database Systems Handbook BY: MUHAMMAD SHARIF 394
What is a Workspace Administrator?
Workspace administrators have all the rights and privileges available to developers and, in addition, manage administrator tasks specific to a workspace. In Oracle Application Express, users sign in to a shared work area called a workspace. A workspace enables multiple users to work within the same Oracle Application Express installation while keeping their objects, data, and applications private. This flexible architecture enables a single database instance to manage thousands of applications.
Database Systems Handbook BY: MUHAMMAD SHARIF 395
Within a workspace, end users can only run existing database or Websheet applications. Developers can create and edit applications, monitor workspace activity, and view dashboards. Oracle Application Express includes two administrator roles:
1. Workspace administrators are users who perform administrator tasks specific to a workspace.
2. Instance administrators are superusers that manage an entire hosted Oracle Application Express instance, which may contain multiple workspaces.
Workspace administrators can reset passwords, view product and environment information, manage the Export repository, manage saved interactive reports, view the workspace summary report, and manage Websheet database objects. Additionally, workspace administrators manage service requests, configure workspace preferences, manage user accounts, monitor workspace activity, and view log files.
Apex Development Models and RAD development
One might think that since APEX is a development framework, there is no need for methodology. After all, it is a Rapid Application Development (RAD) tool. When developing applications using Application Builder, you must find a balance between two dramatically different development methodologies:
Database Systems Handbook BY: MUHAMMAD SHARIF 396
Iterative, rapid application development, or planned, linear-style development.
The first approach offers so much flexibility that you run the risk of never completing your project. In contrast, the second approach can yield applications that do not meet the needs of end users even if they meet the stated requirements on paper.
Oracle APEX is a full-spectrum technology. It can be used by so-called citizen developers, who can use the wizards to create some simple applications to get going. However, these people can team up with a technical developer to create a more complex application together, and in such a case it also goes full spectrum – code by code, line by line, back-end development, front-end development, database development. If you get a perfect mix of front-end and back-end developers, then you can create a truly great APEX application.
System Development Life Cycle Methodologies to Consider
The system development life cycle (SDLC) is the overall process of developing software using a series of defined steps. There are several SDLC models that work well for developing applications in Oracle Application Express. Our methodology is composed of different elements related to all aspects of an APEX development project.
Database Systems Handbook BY: MUHAMMAD SHARIF 397
This methodology is referred to as a waterfall because the output from one stage is the input for the next stage. A primary problem with this approach is that it is assumed that all requirements can be established in advance. Unfortunately, requirements often change and evolve during the development process.
The Oracle Application Express development environment enables developers to take a more iterative approach to development. Unlike many other development environments, creating prototypes is easy. With Oracle Application Express, developers can:
Use built-in wizards to quickly design an application user interface
Make prototypes available to users and gather feedback
Implement changes in real time, creating new prototypes instantly
So how do I create such an app? I sign in to the APEX workspace, click the Create button, and choose the New application option. I called my app “Warsaw Air Quality Log”. For features, I select an About Page, Configuration Options, Activity Reporting, and Theme Style Selection. I leave the rest of the fields blank for now and instead just click Create Application. As you’ll see when you check it out for yourselves, creating a basic app is very quick. Of course, I could’ve added more pages there, ticked more options – but that’s all we need for now.
Apex Development
Database Systems Handbook BY: MUHAMMAD SHARIF 398 Deployment options to consider include: Use the same workspace and same schema. Export and then import the application and install it using a different application ID. This approach works well when there are few changes to the underlying objects, but frequent changes to the application functionality. Use a different workspace and same schema. Export and then import the application into a different workspace. This is an effective way to prevent a production application from being modified by developers. Use a different workspace and different schema. Export and then import the application into a different workspace and install it so that it uses a different schema. This new schema needs to have the database objects required by your application. See "Using the Database Object Dependencies Report". Use a different database with all its variations. Export and then import the application into a different Oracle Application Express instance and install it using a different workspace, schema, and database.
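All of these deployment options start with exporting the application to a SQL script and then importing or installing it on the target. A minimal sketch, assuming SQLcl is available and using a placeholder application ID of 100:
-- In SQLcl, connected to the source schema:
apex export -applicationid 100
-- This writes f100.sql to the current directory. On the target instance,
-- import it through App Builder (Import), selecting the target workspace,
-- schema, and (if needed) a different application ID.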
Database Systems Handbook BY: MUHAMMAD SHARIF 399 Migration of Applications
Migration of Oracle Forms to APEX forms
After converting your Forms files into XML files, sign in to your APEX workspace and be sure you're using the schema that contains all database objects needed by the forms. Now, create a Migration Project and upload the XML files, following these steps:
1. Click App Builder.
2. Navigate to the right panel and click Oracle Forms Migrations.
3. Click Create Project.
4. Enter the Project Name and Description.
5. Select the schema.
6. Upload the XML file.
7. Click Next.
8. Click Upload Another File if you have more XML files; otherwise, click Create.
Now let's review each component in the uploaded forms to determine the proper regions to use in the APEX application. Also, let's review the Triggers and Program Units to identify the business logic in your Forms application and determine whether it needs to be replicated. Oracle Forms applications still play a vital role, but many are looking for ways to modernize their applications. Modernize your Oracle Forms applications by migrating them to Oracle Application Express (Oracle APEX) in the cloud. Your stored procedures and PL/SQL packages work natively in Oracle APEX, making it the clear platform of choice for easily transitioning Oracle Forms applications to modern web applications with more capabilities, less complexity, and lower development and maintenance costs.
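Because stored procedures and packages work natively in APEX, a typical first step is to move trigger logic into the database, where both Forms (until it is retired) and APEX can call it. The following is an illustration only; the table, columns, and error code are invented for the example.
-- Logic formerly in a Forms WHEN-VALIDATE-ITEM trigger, moved into a
-- database procedure that an APEX process or validation can call.
create or replace procedure check_credit_limit (
  p_customer_id in customers.customer_id%type,
  p_order_total in number
) as
  l_limit customers.credit_limit%type;
begin
  select credit_limit into l_limit
    from customers
   where customer_id = p_customer_id;
  if p_order_total > l_limit then
    raise_application_error(-20001, 'Order exceeds customer credit limit');
  end if;
end check_credit_limit;
/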
Database Systems Handbook BY: MUHAMMAD SHARIF 400 Oracle APEX is a low-code development platform that enables you to build scalable, secure enterprise apps, with world-class features, that you can deploy anywhere. You can quickly develop and deploy compelling apps that solve real problems and provide immediate value. You won't need to be an expert in a vast array of technologies to deliver sophisticated solutions.
Architecture
This architecture shows the process of migrating on-premises Oracle Forms applications to Oracle Application Express (APEX) applications with the help of an XML converter, and then moving them to the cloud. The following diagram illustrates this reference architecture.
Recommendations for migration of database applications
Use the following recommendations as a starting point to plan your migration to Oracle Application Express. Your requirements might differ from the architecture described here.
VCN
When you create a VCN, determine how many IP addresses your cloud resources in each subnet require. Using Classless Inter-Domain Routing (CIDR) notation, specify a subnet mask and a network address range large enough for the required IP addresses. Use CIDR blocks that are within the standard private IP address space. After you create a VCN, you can change, add, and remove its CIDR blocks. When you design the subnets, consider functionality and security requirements. All compute instances within the same tier or role should go into the same subnet.
Database Systems Handbook BY: MUHAMMAD SHARIF 401 Use regional subnets.
Security lists
Use security lists to define ingress and egress rules that apply to the entire subnet.
Cloud Guard
Clone and customize the default recipes provided by Oracle to create custom detector and responder recipes. These recipes enable you to specify what type of security violations generate a warning and what actions are allowed to be performed on them. For example, you might want to detect Object Storage buckets that have visibility set to public. Apply Cloud Guard at the tenancy level to cover the broadest scope and to reduce the administrative burden of maintaining multiple configurations. You can also use the Managed List feature to apply certain configurations to detectors.
Security Zones
For resources that require maximum security, Oracle recommends that you use security zones. A security zone is a compartment associated with an Oracle-defined recipe of security policies that are based on best practices. For example, the resources in a security zone must not be accessible from the public internet and they must be encrypted using customer-managed keys. When you create and update resources in a security zone, Oracle Cloud Infrastructure validates the operations against the policies in the security-zone recipe, and denies operations that violate any of the policies.
Schema
Retain the database structure that Oracle Forms was built on, as is, and use that as the schema for Oracle APEX.
Business Logic
Most of the business logic for Oracle Forms is in triggers, program units, and events. Before starting the migration of Oracle Forms to Oracle APEX, migrate the business logic to stored procedures, functions, and packages in the database.
Considerations
Consider the following key items when migrating Oracle Forms Object Navigator components to Oracle Application Express (APEX):
Data Blocks
A data block from Oracle Forms maps to an Oracle APEX page broken up into several regions and components. Review the Oracle APEX component templates available in the Universal Theme.
Triggers
In Oracle Forms, triggers control almost everything. In Oracle APEX, control is based on flexible conditions that are activated when a page is submitted and are managed by validations, computations, dynamic actions, and processes (a validation sketch follows below).
Alerts
Most messages in Oracle APEX are generated when you submit a page.
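To make the Triggers point above concrete: where a Forms trigger validated a field, APEX typically uses a validation of type PL/SQL Function Returning Boolean that runs when the page is submitted. A minimal sketch; the page items and the inventory table are hypothetical names.
-- Validation body: passes only when the requested quantity is positive
-- and does not exceed stock. :P2_PRODUCT_ID and :P2_QUANTITY are
-- hypothetical page items.
declare
  l_on_hand inventory.quantity_on_hand%type;
begin
  select quantity_on_hand into l_on_hand
    from inventory
   where product_id = :P2_PRODUCT_ID;
  return :P2_QUANTITY > 0 and :P2_QUANTITY <= l_on_hand;
exception
  when no_data_found then
    return false;  -- unknown product: fail the validation
end;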
Database Systems Handbook BY: MUHAMMAD SHARIF 402 Attached Libraries
Oracle APEX takes care of the JavaScript and CSS libraries that support the Universal Theme, which supports all of the components that you need for flexible, dynamic applications. You can include your own JavaScript and CSS in several ways, mostly through page attributes. You can choose to add inline code, or reference files that exist either in the database as a BLOB (#APP_IMAGES#) or on the middle tier, typically served by Oracle REST Data Services (ORDS). When a reference file is on an Oracle WebLogic Server, the file location is prefixed with #IMAGE_PREFIX#.
Editors
Oracle APEX has a text area and a rich text editor, which are equivalent to Editors in Oracle Forms.
List of Values (LOV)
In APEX, the LOV is coupled with the item type. A radio group works well with a small handful of values, a select list suits middle-sized sets, and a Popup LOV suits large data sets. You can reuse the queries from Record Groups in Oracle Forms for the LOV queries in Oracle APEX. LOVs in Oracle APEX can be dynamically driven by a SQL query, or be statically defined. A static definition allows a variety of conditions to be applied to each entry. These LOVs can then be associated with items such as radio groups and select lists, or with a column in a report, to translate a code to a label.
Parameters
Page items in Oracle APEX are populated between pages to pass information to the next page, such as the selected record in a report. Larger forms with a number of items are generally submitted as a whole, where the page process handles the data and branches to the next page. These values can be protected from URL tampering by session state security, at item, page, and application levels, often by default.
Popup Menus
Popup menus are not available out of the box in Oracle APEX, but you can build them by using lists and associating a button with the menu.
Program Units
Migrate the stored procedures and functions defined in program units in Oracle Forms into database stored procedures and functions, and use them in Oracle APEX processes, validations, and computations.
Property Classes
Property Classes in Oracle Forms allow the developer to utilize common attributes among each instance of a component. In APEX you can define User Interface Defaults in the data dictionary, so that each time items or reports are created for specific tables or columns, the same features are applied by default. As for the style of the application, you can apply classes to components that carry a particular look and feel. The Universal Theme has a default skin that you can reconfigure declaratively.
Record Groups
Use the queries in Record Groups to define dynamic LOVs in Oracle APEX, as in the example below.
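For example, a dynamic LOV in APEX is defined by a query that selects a display value and a return value, much like the query behind a Forms Record Group. The table and columns here are hypothetical:
select department_name d,   -- display value shown to the user
       department_id   r    -- return value stored in the page item
  from departments
 order by department_name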
Database Systems Handbook BY: MUHAMMAD SHARIF 403 Reports
Interactive Reports in Oracle APEX come with a number of runtime manipulation options that give users the power to customize and manipulate the reports. Classic Reports are simple reports that don't provide runtime manipulation options, but are based on SQL.
Menus
Oracle Forms has specific menu files, controlled by database roles; updating the .mmx file required that there be no active users. The menu in Oracle APEX can be either across the top or down the left side. These menus can be statically defined or dynamically driven. Static navigation entries can be controlled by authorization schemes or custom conditions. Dynamic menus can have security tables integrated within the SQL.
Properties
The Page Designer introduced in Oracle APEX is similar to Oracle Forms, particularly with regard to the ability to edit multiple components at once, changing only the intersecting attributes.
APEX: Managing Logs, Files, and Recovery
Page View Activity Logs track user activity for an application. The Application Express engine uses two logs to track user activity; at any given time, one log is designated as current. For each rendered page view, the Application Express engine inserts one row into the log file. A log switch occurs at the interval listed on the Page View Activity Logs page. At that point, the Application Express engine removes all entries in the noncurrent log and designates it as current.
SQL Workshop Logs
Delete SQL Workshop log entries. The SQL Workshop maintains a history of SQL statements run in SQL Commands.
Workspace Activity Reports
Workspace administrators are users who perform administrator tasks specific to a workspace and have access to various types of activity reports.
Instance Activity Reports
Instance administrators are superusers who manage an entire hosted instance using the Application Express Administration Services application.
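Much of this logged activity is also queryable through the APEX dictionary views. Assuming the documented APEX_WORKSPACE_ACTIVITY_LOG view is present in your release (verify the view and column names against your version), a summary of the last week's page views might look like this:
-- Page views per application over the last 7 days (a sketch; the
-- view and column names should be checked for your APEX release).
select application_id,
       application_name,
       count(*) as page_views
  from apex_workspace_activity_log
 where view_date > sysdate - 7
 group by application_id, application_name
 order by page_views desc;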
Database Systems Handbook BY: MUHAMMAD SHARIF 404 RMAN Backup/Restore
Suppose you have lost the APEX tablespace but your database is currently functioning. If this is the case, and assuming your APEX tablespace does not span multiple datafiles, you can attempt to swap out the datafile. Force a backup in RMAN before trying any of this. There are a few different options here. All you really need are the following:
 Datafile
 Control file
 Archive/redo logs (if you want to move forward or backward in time)
Then run rman target / from a bash terminal. In RMAN, run the following. (The original text garbles this sequence and elides the SET NEWNAME target, so the control-file path and new datafile location below are placeholders to adapt.)
RESTORE CONTROLFILE FROM '/tmp/oradata/your_ctrl_file_dir';
ALTER TABLESPACE apex OFFLINE IMMEDIATE;
RUN {
  SET NEWNAME FOR DATAFILE '/tmp/oradata/apex01.dbf' TO '<new_datafile_location>';
  RESTORE TABLESPACE apex;
  SWITCH DATAFILE ALL;
  RECOVER TABLESPACE apex;
}
ALTER TABLESPACE apex ONLINE;
Database Systems Handbook BY: MUHAMMAD SHARIF 405 Swap out Datafile
First find the location of your datafiles. You can find them by running the following in sqlplus / as sysdba (or whatever client you use):
spool '/tmp/spool.out'
select value from v$parameter where name = 'db_create_file_dest';
select tablespace_name from dba_data_files;
View the spool.out file and verify the location of your datafiles. See if the datafile is still associated with that tablespace. If the tablespace is still there, run:
select file_name, status from dba_data_files where tablespace_name = '<name>';
You want your datafile to be AVAILABLE. Then you want to set the tablespace to read only and take it offline:
alter tablespace <name> read only;
alter tablespace <name> offline;
Now copy your .dbf file to the directory returned from querying the db_create_file_dest value. Don't overwrite the old one; then run:
alter tablespace <name> rename datafile '/u03/waterver/oradata/yourold.dbf' to '/u03/waterver/oradata/yournew.dbf';
This updates your control file to point to the new datafile. You can then bring your tablespace back online and back into read/write mode. You may also want to verify the status of the tablespace, the name of the datafile associated with it, and so on.
APEX version requirements
The APEX option uses storage on the DB instance class for your DB instance. Following are the supported versions and approximate storage requirements for Oracle APEX.
Database Systems Handbook BY: MUHAMMAD SHARIF 406
APEX version   Storage   Supported Oracle Database versions   Notes
21.1.v1        125 MiB   All                                  Includes patch 32598392: PSE BUNDLE FOR APEX 21.1, PATCH_VERSION 3.
20.2.v1        148 MiB   All except 21c                       Includes patch 32006852: PSE BUNDLE FOR APEX 20.2, PATCH_VERSION 2020.11.12.
20.1.v1        173 MiB   All except 21c                       Includes patch 30990551: PSE BUNDLE FOR APEX 20.1, PATCH_VERSION 2020.07.15.
19.2.v1        149 MiB   All except 21c
19.1.v1        148 MiB   All except 21c
18.2.v1        146 MiB   12.1 and 12.2 only
You can see the patch number and date by running the following query:
SELECT PATCH_VERSION, PATCH_NUMBER FROM APEX_PATCHES;
Oracle APEX authentication and authorization
In addition to authentication and authorization, Oracle provides additional functionality called Oracle VPD. VPD stands for “Virtual Private Database” and offers the possibility to implement multi-client capability in APEX web applications. With Oracle VPD and PL/SQL, specific columns of tables can be declared as conditions to separate data between different clients. An active Oracle VPD automatically adds a WHERE clause to a SQL SELECT statement. This WHERE clause references the declared columns and thus delivers only the rows that match (row-level security).
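As a sketch of how such a VPD policy is wired up with the standard DBMS_RLS API: a policy function returns the WHERE-clause fragment, and ADD_POLICY attaches it to a table. The schema, table, and client_id column are hypothetical; the APEX$SESSION context is the documented APEX session namespace, but verify that it is populated in your configuration.
-- Policy function: returns the predicate that VPD appends to queries.
create or replace function client_security_policy (
  p_schema in varchar2,
  p_object in varchar2
) return varchar2 as
begin
  -- Hypothetical client_id column, matched to the APEX session user.
  return 'client_id = sys_context(''APEX$SESSION'', ''APP_USER'')';
end;
/
begin
  dbms_rls.add_policy(
    object_schema   => 'APP_SCHEMA',
    object_name     => 'ORDERS',
    policy_name     => 'orders_by_client',
    function_schema => 'APP_SCHEMA',
    policy_function => 'client_security_policy',
    statement_types => 'select, insert, update, delete');
end;
/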
Database Systems Handbook BY: MUHAMMAD SHARIF 407 Authentication schemes supported in Oracle APEX:
 Application Express Accounts: user accounts that are created within and managed in the Oracle Application Express user repository. When you use this method, your application is authenticated against these accounts.
 Custom Authentication: create a custom authentication scheme from scratch to have complete control over your authentication interface.
 Database Accounts: Database Account Credentials authentication uses database schema accounts to authenticate users.
 HTTP Header Variable: authenticate users externally by storing the username in an HTTP header variable set by the web server.
 Open Door Credentials: enable anyone to access your application using a built-in login page that captures a user name.
 No Authentication (using DAD): adopts the current database user. This approach can be used in combination with a mod_plsql Database Access Descriptor (DAD) configuration that uses basic authentication to set the database session user.
 LDAP Directory: authenticate a user and password with an authentication request to an LDAP server.
 Oracle Application Server Single Sign-On Server: delegates authentication to the Oracle AS Single Sign-On (SSO) Server. To use this authentication scheme, your site must have been registered as a partner application with the SSO server.
 SAML Sign-In: delegates authentication to the Security Assertion Markup Language (SAML) Sign In authentication scheme.
 Social Sign-In: supports authentication with Google, Facebook, and other social networks that support the OpenID Connect or OAuth2 standards.
Authorization Scheme Types
When you create an authorization scheme you select an authorization scheme type. The authorization scheme type determines how an authorization scheme is applied. Developers can create new authorization type plug-ins to extend this list.
Database Systems Handbook BY: MUHAMMAD SHARIF 408 The authorization scheme types and their descriptions:
Exists SQL Query: enter a query that causes the authorization scheme to pass if it returns at least one row and to fail if it returns no rows.
NOT Exists SQL Query: enter a query that causes the authorization scheme to pass if it returns no rows and to fail if it returns one or more rows.
PL/SQL Function Returning Boolean: enter a function body. If the function returns true, the authorization succeeds.
Item in Expression 1 is NULL: enter an item name. If the item is null, the authorization succeeds.
Item in Expression 1 is NOT NULL: enter an item name. If the item is not null, the authorization succeeds.
Value of Item in Expression 1 Equals Expression 2: enter an item name and a value. The authorization succeeds if the item's value equals the authorization value.
Value of Item in Expression 1 Does NOT Equal Expression 2: enter an item name and a value. The authorization succeeds if the item's value is not equal to the authorization value.
Value of Preference in Expression 1 Does NOT Equal Expression 2: enter a preference name and a value. The authorization succeeds if the preference's value is not equal to the authorization value.
Value of Preference in Expression 1 Equals Expression 2: enter a preference name and a value. The authorization succeeds if the preference's value equals the authorization value.
Is In Group: enter a group name. The authorization succeeds if the group is enabled as a dynamic group for the session. If the application uses Application Express Accounts authentication, this check also includes workspace groups that are granted to the user. If the application uses Database authentication, this check also includes database roles that are granted to the user.
Is Not In Group: enter a group name. The authorization succeeds if the group is not enabled as a dynamic group for the session.
Upgrades of APEX Software
The basic steps for upgrading APEX are:
Run the APEX installation script against the target database. The same script is used for new installations and upgrades; the script automatically senses whether a version of APEX is present and takes the appropriate action.
Database Systems Handbook BY: MUHAMMAD SHARIF 409 Update the existing version of the /i/ virtual directory with the images, JavaScript, CSS, etc. from the current version's APEX installation media. For standard HTTP Server installations, this is just a simple copy command. For the Embedded PL/SQL Gateway (EPG), the script apxldimg.sql is used to load the images into the database. For the APEX Listener / Oracle REST Data Services (ORDS), recreate the i.jar file that contains the references to the images, JavaScript, CSS, etc. from the APEX installation media, OR copy the new versions of the files to the existing location referenced by the current APEX Listener / ORDS / web server.
Check Your Version
Prior to the Application Express (APEX) upgrade, begin by identifying the version of APEX currently installed and the database prerequisites. To do this, run a query like the one shown below in SQL*Plus as SYS or SYSTEM, where <SCHEMA> represents the current version of APEX and is one of the following:
 For APEX (HTML DB) versions 1.5 - 3.1, the schema name is FLOWS_XXXXXX. For example: FLOWS_010500 for HTML DB version 1.5.x.
 For APEX (HTML DB) versions 3.2.x and above, the schema name is APEX_XXXXXX. For example: APEX_210100 for APEX version 21.1.
 If the query returns 0, it is a runtime-only installation, and apxrtins.sql should be used for the upgrade. If the query returns 1, it is a development install, and apexins.sql should be used.
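The query referred to above is not reproduced in the original text. A commonly used form, based on Oracle's support notes (treat it as an assumption and verify against your release), first locates the APEX schema and then checks whether the development-time applications are installed:
-- Locate the APEX (or older HTML DB) schema:
select username
  from dba_users
 where username like 'FLOWS%' or username like 'APEX%';
-- Then, substituting that schema for <SCHEMA>, check the install type:
-- returns 1 for a development install, 0 for a runtime-only install.
select count(*)
  from <SCHEMA>.wwv_flows
 where id = 4000;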
Database Systems Handbook BY: MUHAMMAD SHARIF 410 The full download is needed if the first two digits of the APEX version are different. For example, the full Application Express download is needed to go from 20.0 to 21.1. See Note 752705.1, "ORA-1435: User Does Not Exist When Upgrading APEX Using apxpatch.sql", for more information. The patch is sufficient if only the third digit of the version changes, so upgrading from 21.1.0 to 21.1.2 only requires the patch.
Database Systems Handbook BY: MUHAMMAD SHARIF 411 END
Database Systems Handbook BY: MUHAMMAD SHARIF 412 CHAPTER 19 ORACLE WEBLOGIC SERVERS AND ITS CONFIGURATIONS What is Oracle WebLogic Server? Oracle WebLogic Server is a leading e-commerce online transaction processing (OLTP) platform, developed to connect users in distributed computing production environments and to facilitate the integration of mainframe applications with distributed corporate data and applications. History of WebLogic WebLogic server was the first J2EE application server.  1995: WebLogic, Inc. founded.  1997: First release of WebLogic Tengah.  1998: WebLogic, Inc., acquired by BEA Systems.  2008: BEA Systems acquired by Oracle.  2020: WebLogic Server version 14 released.
Database Systems Handbook BY: MUHAMMAD SHARIF 413 WebLogic is an application server that runs on a middle tier, between back-end databases and related applications and browser-based thin clients. WebLogic Server mediates the exchange of requests from the client tier with responses from the back-end tier. WebLogic Server is based on Java Platform, Enterprise Edition (Java EE) (formerly known as Java 2 Platform, Enterprise Edition or J2EE), the standard platform used to create Java-based multi-tier enterprise applications.
Oracle WebLogic Server vs. Apache Tomcat
The Apache Tomcat web server is often compared with WebLogic Server. The Tomcat web server serves static content and dynamic content in web applications delivered as Java servlets and JavaServer Pages.
Programming Models
WebLogic Server provides complete support for Java EE 6.0.
Web Applications provide the basic Java EE mechanism for deployment of dynamic Web pages based on the Java EE standards of servlets and JavaServer Pages (JSP). Web applications are also used to serve static Web content such as HTML pages and image files.
Web Services provide a shared set of functions that are available to other systems on a network and can be used as a component of distributed Web-based applications.
XML capabilities include data exchange, a means to store content independently of its presentation, and more.
Database Systems Handbook BY: MUHAMMAD SHARIF 414 Java Message Service (JMS) enables applications to communicate with one another through the exchange of messages. A message is a request, report, and/or event that contains information needed to coordinate communication between different applications.
Java Database Connectivity (JDBC) provides pooled access to DBMS resources.
Resource Adapters provide connectivity to Enterprise Information Systems (EISes).
Enterprise JavaBeans (EJB) provide Java objects to encapsulate data and business logic.
Remote Method Invocation (RMI) is the Java standard for distributed object computing, allowing applications to invoke methods on remote objects as if they were local.
Security APIs allow you to integrate authentication and authorization into your Java EE applications. You can also use the Security Provider APIs to create your own custom security providers.
WebLogic Tuxedo Connectivity (WTC) provides interoperability between WebLogic Server applications and Tuxedo services. WTC allows WebLogic Server clients to invoke Tuxedo services and Tuxedo clients to invoke EJBs in response to a service request.
Database Systems Handbook BY: MUHAMMAD SHARIF 415 Oracle's service oriented architecture (SOA)
Database Systems Handbook BY: MUHAMMAD SHARIF 416 Oracle Fusion Applications Architecture Oracle offers three distinct products as part of the Oracle WebLogic Server 11g application family:  Oracle WebLogic Server Standard Edition (SE)  Oracle WebLogic Server Enterprise Edition (EE)  Oracle WebLogic Suite Oracle WebLogic 11g Server Standard Edition The WebLogic Server Standard Edition (SE) is a full-featured server, but is mainly intended for developers to develop enterprise applications quickly. WebLogic Server SE implements all the Java EE standards and offers management capabilities through the Administration Console.
Database Systems Handbook BY: MUHAMMAD SHARIF 417 Oracle WebLogic 11g Server Enterprise Edition
Oracle WebLogic Server EE is designed for mission-critical applications that require high availability and advanced diagnostic capabilities. The EE version contains all the features of the SE version, of course, but in addition supports clustering of servers for high availability and the ability to manage multiple domains, plus various diagnostic tools.
Oracle WebLogic Suite 11g
Oracle WebLogic Suite offers support for dynamic scale-out applications with features such as in-memory data grid technology and comprehensive management capabilities. It consists of the following components:
 Oracle WebLogic Server EE
 Oracle Coherence (provides in-memory caching)
 Oracle TopLink (provides persistence functionality)
 Oracle JRockit (for low-latency, high-throughput transactions)
 Enterprise Manager (admin and operations)
 Development tools (JDeveloper/Eclipse)
How does WebLogic Server operate? The big picture:
Database Systems Handbook BY: MUHAMMAD SHARIF 424 J2EE Platform
Database Systems Handbook BY: MUHAMMAD SHARIF 425 WebLogic Server implements Java 2 Platform, Enterprise Edition (J2EE) version 1.3 technologies. J2EE is the standard platform for developing multitier enterprise applications based on the Java programming language. The technologies that make up J2EE were developed collaboratively by Sun Microsystems and other software vendors, including BEA Systems. J2EE applications are based on standardized, modular components. WebLogic Server provides a complete set of services for those components and handles many details of application behavior automatically, without requiring programming. Note: Because J2EE is backward compatible, you can still run J2EE 1.3 applications on WebLogic Server versions 7.x and later.
What Are WebLogic Server J2EE Applications and Modules?
A BEA WebLogic Server J2EE application consists of one of the following modules or applications running on WebLogic Server:
Web application modules: HTML pages, servlets, JavaServer Pages, and related files. See Web Application Modules.
Enterprise JavaBeans (EJB) modules: entity beans, session beans, and message-driven beans. See Enterprise JavaBean Modules.
Connector modules: resource adapters. See Connector Modules.
Enterprise applications: Web application modules, EJB modules, and resource adapters packaged into an application. See Enterprise Applications.
Web Application Modules
A Web application on WebLogic Server includes the following files:
At least one servlet or JSP, along with any helper classes.
A web.xml deployment descriptor, a J2EE standard XML document that describes the contents of a WAR file.
Optionally, a weblogic.xml deployment descriptor, an XML document containing WebLogic Server-specific elements for Web applications.
A Web application can also include HTML and XML pages with supporting files such as images and multimedia files.
Servlets
Servlets are Java classes that execute in WebLogic Server, accept a request from a client, process it, and optionally return a response to the client. An HttpServlet is most often used to generate dynamic Web pages in response to Web browser requests.
JavaServer Pages
JavaServer Pages (JSPs) are Web pages coded with an extended HTML that makes it possible to embed Java code in a Web page. JSPs can call custom Java classes, known as tag libraries, using HTML-like tags. The appc compiler compiles JSPs and translates them into servlets. WebLogic Server automatically compiles JSPs if the servlet class file is not present or is older than the JSP source file. See Using Ant Tasks to Create Compile Scripts. You can also precompile JSPs and package the servlet class in a Web archive (WAR) file to avoid compiling in the server. Servlets and JSPs may require additional helper classes that must also be deployed with the Web application.
Overview of WebLogic Resource Types
WebLogic resources are hierarchical. 
Therefore, the level at which you define security roles and security policies is up to you. For example, you can define security roles and security policies for an entire Enterprise Application (EAR),
Database Systems Handbook BY: MUHAMMAD SHARIF 426 an Enterprise JavaBean (EJB) JAR containing multiple EJBs, a particular EJB within that JAR, or a single method within that EJB. Administrative Resources An Administrative resource is a type of WebLogic resource that allows users to perform administrative tasks. Examples of Administrative resources include the WebLogic Server Administration Console, the weblogic.Admin tool, and MBean APIs. Administrative resources are limited in scope. Currently, you can only secure the User Lockout operation on an Administrative resource using the WebLogic Server Administration Console. This operation provides compatibility with WebLogic Server 6.x., and allows users who meet the security requirements to unlock users who have been locked out of their accounts. For more information about user lockout, see Protecting User Accounts in Managing WebLogic Security. Application Resources An Application resource is a type of WebLogic resource that represents an Enterprise Application, packaged as an EAR (Enterprise Application aRchive) file. Unlike the other types of WebLogic resources, the hierarchy of an Application resource is a mechanism for containment, rather than a type hierarchy. You secure an Application resource when you want to protect multiple WebLogic resources that constitute the Enterprise Application (for example, EJB resources, URL resources, and Web Service resources). In other words, securing an Enterprise Application will cause all the WebLogic resources within that application to inherit its security configuration. You can also secure, on an individual basis, the WebLogic resources that constitute an Enterprise Application (EAR). Securing a resource by both means causes the individual security configuration to override the security configuration inherited from the Enterprise Application for that WebLogic resource. Enterprise Information Systems (EIS) Resources A J2EE Connector is a system-level software driver used by an application server such as WebLogic Server to connect to an Enterprise Information System (EIS). BEA supports Connectors developed by EIS vendors and third-party application developers that can be deployed in any application server supporting the Sun Microsystems J2EE Platform Specification, Version 1.3. Connectors, also known as Resource Adapters, contain the Java, and if necessary, the native components required to interact with the EIS. An Enterprise Information System (EIS) resource is a specific type of WebLogic resource that is designed as a Connector. To secure access to an EIS, you create security policies and security roles for all Connectors as a group, or for individual Connectors. Information about securing EIS resources can be found both in this document, and in the Security section of Programming WebLogic J2EE Connectors. Instructions for creating the credential maps for use with EIS resources are available in the Single Sign-On with Enterprise Information Systems section of Managing WebLogic Security. COM Resources WebLogic jCOM is a software bridge that allows bidirectional access between Java/J2EE objects deployed in WebLogic Server, and Microsoft ActiveX components available within the Microsoft Office family of products, Visual Basic and C++ objects, and other Component Object Model/Distributed Component Object Model (COM/DCOM) environments. A COM resource is a specific type of WebLogic resource that is designed as a program component object according to Microsoft's framework. 
To secure COM components accessed through BEA's bi-directional COM-Java (jCOM) bridging tool, you create security policies and security roles for packages containing multiple COM classes, or for individual COM classes. Information about securing COM resources can be found both in this document and in the Configuring Access Control section of Programming WebLogic jCOM.
Database Systems Handbook BY: MUHAMMAD SHARIF 427 Java DataBase Connectivity (JDBC) Resources
A Java DataBase Connectivity (JDBC) resource is a specific type of WebLogic resource that is related to JDBC. To secure JDBC database access, you can create security policies and security roles for all connection pools as a group, individual connection pools, and MultiPools. When you secure individual connection pools, you can choose whether to protect all operations on the connection pool, or protect one of the following operations:
Admin: the following methods on the JDBCConnectionPoolRuntimeMBean are invoked as admin operations: clearStatementCache, destroy, disableDroppingUsers, disableFreezingUsers, enable, forceDestroy, forceShutdown, forceSuspend, getProperties, poolExists, resume, shutdown, shutdownHard, shutdownSoft, and suspend.
Reserve: applications reserve a connection in the connection pool by looking up the data source that points to the connection pool and then calling getConnection. Note: Giving a user the reserve permission enables them to execute vendor-specific operations on the connection. Depending on the database vendor, some of these operations may have database security implications.
Shrink: shrinks the connection pool to the maximum of the currently reserved connections or the initial size.
Reset: resets the database connection pool by shutting down and re-establishing all physical database connections. This also clears the statement cache for each connection in the connection pool. You can only reset a normally running connection pool.
Note: If a security policy controls access to a connection pool that is in a MultiPool, access checks are performed at both levels of the JDBC resource hierarchy (once at the MultiPool level, and again at the individual connection pool level). As with all types of WebLogic resources, this double-checking ensures that the most restrictive security policy controls access.
Note: If you are an Oracle user, you can also control access to JDBC resources using an Oracle Virtual Private Database (VPD). For more information, see Programming with Oracle Virtual Private Databases in Using Third-Party Drivers with WebLogic Server.
ODBC and JDBC details
Database Systems Handbook BY: MUHAMMAD SHARIF 435 Java Message Service (JMS) Resources
A Java Message Service (JMS) resource is a specific type of WebLogic resource that is related to JMS. To secure JMS destinations, you create security policies and security roles for all destinations (JMS queues and JMS topics) as a group, or for an individual destination (JMS queue or JMS topic) on a JMS server. When you secure a particular destination on a JMS server, you can protect all operations on the destination, or protect one of the following operations:
Database Systems Handbook BY: MUHAMMAD SHARIF 436 Send: required to send a message to a queue or a topic. This includes calls to the MessageProducer.send(), QueueSender.send(), and TopicPublisher.publish() methods, as well as the Messaging Bridge.
Receive: required to create a consumer on a queue or a topic. This includes calls to the Session.createConsumer(), Session.createDurableSubscriber(), QueueSession.createReceiver(), TopicSession.createSubscriber(), TopicSession.createDurableSubscriber(), Connection.createConnectionConsumer(), Connection.createDurableConnectionConsumer(), QueueConnection.createConnectionConsumer(), TopicConnection.createConnectionConsumer(), and TopicConnection.createDurableConnectionConsumer() methods, as well as the Messaging Bridge and message-driven beans.
Browse: required to view the messages on a queue using the QueueBrowser interface.
Java Naming and Directory Interface (JNDI) Resources
JNDI provides a common-denominator interface to many existing naming services, such as Lightweight Directory Access Protocol (LDAP) and Domain Name System (DNS). These naming services maintain a set of bindings, which relate names to objects and provide the ability to look up objects by name. JNDI allows the components in distributed applications to locate each other.
===========================END=========================
Database Systems Handbook BY: MUHAMMAD SHARIF 437

Muhammad Sharif DBA full dbms Handbook Database systems.pdf

  • 1.
    Prepared by: ============== Dedication I dedicateall my efforts to my reader who gives me an urge and inspiration to work more. Muhammad Sharif Author
  • 2.
    Database Systems Handbook BY:MUHAMMAD SHARIF 2 CHAPTER 1 INTRODUCTION TO DATABASE AND DATABASE MANAGEMENT SYSTEM CHAPTER 2 DATA TYPES, DATABASE KEYS, SQL FUNCTIONS AND OPERATORS CHAPTER 3 DATA MODELS AND MAPPING TECHNIQUES CHAPTER 4 DISCOVERING BUSINESS RULES AND DATABASE CONSTRAINTS CHAPTER 5 DATABASE DESIGN STEPS AND IMPLEMENTATIONS CHAPTER 6 DATABASE NORMALIZATION AND DATABASE JOINS CHAPTER 7 FUNCTIONAL DEPENDENCIES IN THE DATABASE MANAGEMENT SYSTEM CHAPTER 8 DATABASE TRANSACTION, SCHEDULES, AND DEADLOCKS CHAPTER 9 RELATIONAL ALGEBRA AND QUERY PROCESSING CHAPTER 10 FILE STRUCTURES, INDEXING, AND HASHING CHAPTER 11 DATABASE USERS AND DATABASE SECURITY MANAGEMENT CHAPTER 12 BUSINESS INTELLIGENCE TERMINOLOGIES IN DATABASE SYSTEMS CHAPTER 13 DBMS INTEGRATION WITH BPMS CHAPTER 14 RAID STRUCTURE AND MEMORY MANAGEMENT CHAPTER 15 ORACLE DATABASE FUNDAMENTAL AND ITS ADMINISTRATION CHAPTER 16 DATABASE BACKUPS AND RECOVERY, LOGS MANAGEMENT
  • 3.
    Database Systems Handbook BY:MUHAMMAD SHARIF 3 CHAPTER 17 PREREQUISITES OF STORAGE MANAGEMENT AND ORACLE INSTALLATION CHAPTER 18 ORACLE DATABASE APPLICATIONS DEVELOPMENT USING ORACLE APPLICATION EXPRESS CHAPTER 19 ORACLE WEBLOGIC SERVERS AND ITS CONFIGURATIONS ============================================= Acknowledgments We are grateful to numerous individuals who contributed to the preparation of relational database systems and management, 2nd edition. First, we wish to thank our reviewers for their detailed suggestions and insights, characteristic of their thoughtful teaching style. All glories praises and gratitude to Almighty Allah, who blessed us with a super and unequaled Professor as ‘Brain’.
  • 4.
    Database Systems Handbook BY:MUHAMMAD SHARIF 4 CHAPTER 1 INTRODUCTION TO DATABASE AND DATABASE MANAGEMENT SYSTEM What is Data? Data – The World’s Most Valuable Resource. Data are the raw bits and pieces of information with no context. If I told you, “15, 23, 14, 85,” you would not have learned anything. But I would have given you data. Data are facts that can be recorded, having explicit meaning. Classifcation of Data We can classify data as structured, unstructured, or semi-structured data. 1. Structured data is generally quantitative data, it usually consists of hard numbers or things that can be counted. 2. Unstructured data is generally categorized as qualitative data, and cannot be analyzed and processed using conventional tools and methods. 3. Semi-structured data refers to data that is not captured or formatted in conventional ways. Semi- structured data does not follow the format of a tabular data model or relational databases because it does not have a fixed schema. XML, JSON are semi-structured example. Properties Structured data is generally stored in data warehouses. Unstructured data is stored in data lakes. Structured data requires less storage space while Unstructured data requires more storage space. Examples: Structured data (Table, tabular format, or Excel spreadsheets.csv) Unstructured data (Email and Volume, weather data) Semi-structured data (Webpages, Resume documents, XML)
  • 5.
    Database Systems Handbook BY:MUHAMMAD SHARIF 5 Categories of Data
  • 6.
    Database Systems Handbook BY:MUHAMMAD SHARIF 6 Implicit data is information that is not provided intentionally but gathered from available data streams, either directly or through analysis of explicit data. Explicit data is information that is provided intentionally, for example through surveys and membership registration forms. Explicit data is data that is provided intentionally and taken at face value rather than analyzed or interpreted. Data hacking Method A data breach is a cyber attack in which sensitive, confidential or otherwise protected data has been accessed or disclosed. What is a data item? The basic component of a file in a file system is a data item. What are records? A group of related data items treated as a single unit by an application is called a record. What is the file? A file is a collection of records of a single type. A simple file processing system refers to the first computer-based approach to handling commercial or business applications. Mapping from file system to Relational Database In a relational database, a data item is called a column or attribute; a record is called a row or tuple, and a file is called a table. Major challenges from file system to database movements 1. Data validatin 2. Data integrity 3. Data security 4. Data sharing Details will be written later where needed.
  • 7.
    Database Systems Handbook BY:MUHAMMAD SHARIF 7 What is information? When we organized data that has some meaning, we called information. What is the database?
  • 8.
    Database Systems Handbook BY:MUHAMMAD SHARIF 8 What is Database Application? A database application is a program or group of programs that are used for performing certain operations on the data stored in the database. These operations may contain insertion of data into a database or extracting some data from the database based on a certain condition, updating data in the database. Examples: (GIS/GPS). What is Knowledge? Knowledge = information + application What is Meta Data? The database definition or descriptive information is also stored by the DBMS in the form of a database catalog or dictionary, it is called meta-data. Data that describe the properties or characteristics of end-user data and the context of those data. Information about the structure of the database. Example Metadata for Relation Class Roster catalogs (Attr_Cat(attr_name, rel_name, type, position like 1,2,3, access rights on objects, what is the position of attribute in the relation). Simple definition is data about data. What is Shared Collection? The logical relationship between data. Data inter-linked between data is called a shared collection. It means data is in the repository and we can access it. What is Database Management System (DBMS)? A database management system (DBMS) is a software package or programs designed to define, retrieve, Control, manipulate data, and manage data in a database. What are database systems? A shared collection of logically related data (comprises entities, attributes, and relationships), is designed to meet the information needs of the organization. The database and DBMS software together is called a database system. Components of a Database Environment 1. Hardware (Server), 2. Software (DBMS), 3. Data and Meta-Data, 4. Procedure (Govern the design of database) 5. Resources (Who Administer database) History of Databases From 1970 to 1972, E.F. Codd published a paper proposed using a relational database model. RDBMS is originally based on E.F. Codd's relational model invention. Before DBMS, there was a file-based system in the era the 1950s.
  • 9.
    Database Systems Handbook BY:MUHAMMAD SHARIF 9 Evolution of Database Systems  Flat files - 1960s - 1980s  Hierarchical – 1970s - 1990s  Network – 1970s - 1990s  Relational – 1980s - present  Object-oriented – 1990s - present  Object-relational – 1990s - present  Data warehousing – 1980s - present  Web-enabled – 1990s – present Here, are the important landmarks from evalution of database systems  1960 – Charles Bachman designed the first DBMS system  1970 – Codd introduced IBM’S Information Management System (IMS)  1976- Peter Chen coined and defined the Entity-relationship model also known as the ER model  1980 – Relational Model becomes a widely accepted database component  1985- Object-oriented DBMS develops.  1990- Incorporation of object-orientation in relational DBMS.  1991- Microsoft MS access, a personal DBMS and that displaces all other personal DBMS products.  1995: First Internet database applications  1997: XML applied to database processing. Many vendors begin to integrate XML into DBMS products. The ANSI-SPARC Database systems Architecture levels 1. The Internal Level (Physical Representation of Data) 2. The Conceptual Level (Holistic Representation of Data) 3. The External Level (User Representation of Data) Internal level store data physically. The conceptual level tells you how the database was structured logically. External level gives you different data views. This is the uppermost level in the database. Database architecture tiers Database architecture has 4 types of tiers. Single tier architecture (for local applications direct communication with database server/disk. It is also called physical centralized architecture.
  • 10.
    Database Systems Handbook BY:MUHAMMAD SHARIF 10 2-tier architecture (basic client-server APIs like ODBC, JDBC, and ORDS are used), Client and disk are connected by APIs called network. 3-tier architecture (Used for web applications, it uses a web server to connect with a database server).
  • 11.
    Database Systems Handbook BY:MUHAMMAD SHARIF 11 Advantages of ANSI-SPARC Architecture The ANSI-SPARC standard architecture is three-tiered, but some books refer 4 tiers. These 4-tiered representation offers several advantages, which are as follows: Its main objective of it is to provide data abstraction. Same data can be accessed by different users with different customized views. The user is not concerned about the physical data storage details. Physical storage structure can be changed without requiring changes in the internal structure of the database as well as users view. The conceptual structure of the database can be changed without affecting end users. It makes the database abstract. It hides the details of how the data is stored physically in an electronic system, which makes it easier to understand and easier to use for an average user. It also allows the user to concentrate on the data rather than worrying about how it should be stored. Types of databases There are various types of databases used for storing different varieties of data in their respective DBMS data model environment. Each database has data models except NoSQL. One is Enterprise Database Management System that is not included in this figure. I will write details one by one in where appropriate. Sequence of details is not necessary.
  • 12.
    Database Systems Handbook BY:MUHAMMAD SHARIF 12 Parallel database architectures Parallel Database architectures are: 1. Shared-memory 2. Shared-disk 3. Shared-nothing (the most common one) 4. Shared Everything Architecture 5. Hybrid System 6. Non-Uniform Memory Architecture A hierarchical model system is a hybrid of the shared memory system, a shared disk system, and a shared-nothing system. The hierarchical model is also known as Non-Uniform Memory Architecture (NUMA). NUMA uses local and remote memory (Memory from another group); hence it will take a longer time to communicate with each other. In NUMA, were different memory controller is used. S.NO UMA NUMA 1 There are 3 types of buses used in uniform Memory Access which are: Single, Multiple and Crossbar. While in non-uniform Memory Access, There are 2 types of buses used which are: Tree and hierarchical. Advantages of NUMA Improves the scalability of the system. Memory bottleneck (shortage of memory) problem is minimized in this architecture. NUMA machines provide a linear address space, allowing all processors to directly address all memory.
  • 13.
    Database Systems Handbook BY:MUHAMMAD SHARIF 13 Distributed Databases Distributed database system (DDBS) = Database Systems + Communication A set of databases in a distributed system that can appear to applications as a single data source. A distributed DBMS (DDBMS) can have the actual database and DBMS software distributed over many sites, connected by a computer network. Distributed DBMS architectures Three alternative approaches are used to separate functionality across different DBMS-related processes. These alternative distributed architectures are called 1. Client-server, 2. Collaborating server or multi-Server 3. Middleware or Peer-to-Peer  Client-server: Client can send query to server to execute. There may be multiple server process. The two different client-server architecture models are: 1. Single Server Multiple Client 2. Multiple Server Multiple Client Client Server architecture layers 1. Presentation layer 2. Logic layer 3. Data layer Presentation layer The basic work of this layer provides a user interface. The interface is a graphical user interface. The graphical user interface is an interface that consists of menus, buttons, icons, etc. The presentation tier presents information related to such work as browsing, sales purchasing, and shopping cart contents. It attaches with other tiers by computing results to the browser/client tier and all other tiers in the network. Its other name is external layer. Logic layer The logical tier is also known as the data access tier and middle tier. It lies between the presentation tier and the data tier. it controls the application’s functions by performing processing. The components that build this layer exist on the server and assist the resource sharing these components also define the business rules like different government legal rules, data rules, and different business algorithms which are designed to keep data structure consistent. This is also known as conceptual layer. Data layer The 3-Data layer is the physical database tier where data is stored or manipulated. It is internal layer of database management system where data stored.  Collaborative/Multi server: This is an integrated database system formed by a collection of two or more autonomous database systems. Multi-DBMS can be expressed through six levels of schema: 1. Multi-database View Level − Depicts multiple user views comprising subsets of the integrated distributed database. 2. Multi-database Conceptual Level − Depicts integrated multi-database that comprises global logical multi- database structure definitions. 3. Multi-database Internal Level − Depicts the data distribution across different sites and multi-database to local data mapping. 4. Local database View Level − Depicts a public view of local data. 5. Local database Conceptual Level − Depicts local data organization at each site. 6. Local database Internal Level − Depicts physical data organization at each site.
  • 14.
    Database Systems Handbook BY:MUHAMMAD SHARIF 14 There are two design alternatives for multi-DBMS − 1. A model with a multi-database conceptual level. 2. Model without multi-database conceptual level.  Peer-to-Peer: Architecture model for DDBMS, In these systems, each peer acts both as a client and a server for imparting database services. The peers share their resources with other peers and coordinate their activities. Its scalability and flexibility is growing and shrinking. All nodes have the same role and functionality. Harder to manage because all machines are autonomous and loosely coupled. This architecture generally has four levels of schemas: 1. Global Conceptual Schema − Depicts the global logical view of data. 2. Local Conceptual Schema − Depicts logical data organization at each site. 3. Local Internal Schema − Depicts physical data organization at each site. 4. Local External Schema − Depicts user view of data Example of Peer-to-peer architecture Types of homogeneous distributed database Autonomous − Each database is independent and functions on its own. They are integrated by a controlling application and use message passing to share data updates.
  • 15.
    Database Systems Handbook BY:MUHAMMAD SHARIF 15 Non-autonomous − Data is distributed across the homogeneous nodes and a central or master DBMS coordinates data updates across the sites. Autonomous databases 1. Autonomous Transaction Processing - Serverless 2. Autonomous Transaction Processing - Dedicated 3. Autonomous data warehourse processing - Analytics Serverless is a simple and elastic deployment choice. Oracle autonomously operates all aspects of the database lifecycle from database placement to backup and updates. Dedicated is a private cloud in public cloud deployment choice. A completely dedicated compute, storage, network, and database service for only a single tenant. Autonomous transaction processing: Architecture Heterogeneous Distributed Databases (Dissimilar schema for each site database, it can be any variety of dbms, relational, network, hierarchical, object oriented) Types of Heterogeneous Distributed Databases 1. Federated − The heterogeneous database systems are independent and integrated so that they function as a single database system. 2. Un-federated − The database systems employ a central coordinating module In a heterogeneous distributed database, different sites have different operating systems, DBMS products, and data models. Parameters at which distributed DBMS architectures developed DDBMS architectures are generally developed depending on three parameters: 1. Distribution − It states the physical distribution of data across the different sites. 2. Autonomy − It indicates the distribution of control of the database system and the degree to which each constituent DBMS can operate independently. 3. Heterogeneity − It refers to the uniformity or dissimilarity of the data models, system components, and databases.
  • 16.
    Database Systems Handbook BY:MUHAMMAD SHARIF 16 Note: The Semi Join and Bloom Join are two techniques/data fetching method in distributed databases. Some Popular databases and respective data models  Native XML Databases We were not surprised that the number of start-up companies as well as some established data management companies determined that XML data would be best managed by a DBMS that was designed specifically to deal with semi-structured data — that is, a native XML database.  Conceptual Database This step is related to the modeling in the Entity-Relationship (E/R) Model to specify sets of data called entities, relations among them called relationships and cardinality restrictions identified by letters N and M, in this case, the many-many relationships stand out.  Conventional Database This step includes Relational Modeling where a mapping from MER to relations using rules of mapping is carried out. The posterior implementation is done in Structured Query Language (SQL).  Non-Conventional database This step involves Object-Relational Modeling which is done by the specification in Structured Query Language. In this case, the modeling is related to the objects and their relationships with the Relational Model.  Traditional database  Temporal database  Conventional Databases  NewSQL Database  Autonomous database  Cloud database  Spatiotemporal  Enterprise Database Management System  Google Cloud Firestore  Couchbase  Memcached, Coherence (key-value store)  HBase, Big Table, Accumulo (Tabular)  MongoDB, CouchDB, Cloudant, JSON-like (Document-based)  Neo4j (Graph Database)  Redis (Data model: Key value)
  • 17.
    Database Systems Handbook BY:MUHAMMAD SHARIF 17  Elasticsearch (Data model: search engine)  Microsoft access (Data model: relational)  Cassandra (Data model: Wide column)  MariaDB (Data model: Relational)  Splunk (Data model: search engine)  Snowflake (Data model: Relational)  Azure SQL Server Database (Relational)  Amazon DynamoDB (Data model: Multi-Model)  Hive (Data model: Relational) Non-relational (NoSQL) Data model BASE Model: Basically Available – Rather than enforcing immediate consistency, BASE-modelled NoSQL databases will ensure the availability of data by spreading and replicating it across the nodes of the database cluster. Soft State – Due to the lack of immediate consistency, data values may change over time. The BASE model breaks off with the concept of a database that enforces its consistency, delegating that responsibility to developers. Eventually Consistent – The fact that BASE does not enforce immediate consistency does not mean that it never achieves it. However, until it does, data reads are still possible (even though they might not reflect the reality). Just as SQL databases are almost uniformly ACID compliant, NoSQL databases tend to conform to BASE principles. NewSQL Database NewSQL is a class of relational database management systems that seek to provide the scalability of NoSQL systems for online transaction processing (OLTP) workloads while maintaining the ACID guarantees of a traditional database system. Examples and properties of Relational Non-Relational Database: The term NewSQL categorizes databases that are the combination of relational models with the advancement in scalability, and flexibility with types of data. These databases focus on the features which are not present in NoSQL, which offers a strong consistency guarantee. This covers two layers of data one relational one and a key-value store.
NoSQL vs. NewSQL comparison:
1. Schema: NoSQL is schema-less (no fixed schema; unstructured) and follows the BASE data model; NewSQL supports both schema-fixed and schema-free operation.
2. Scalability: both are horizontally scalable.
3. Availability: NoSQL provides high availability automatically; NewSQL has built-in high availability.
4. Storage: NoSQL supports cloud, on-disk, and cache storage; NewSQL fully supports cloud, on-disk, and cache storage, though its in-memory architecture may struggle as data volumes grow.
5. Properties: NoSQL promotes CAP properties; NewSQL promotes ACID properties.
6. OLTP: NoSQL does not support online transaction processing; NewSQL fully supports OLTP and integration with traditional relational databases.
7. Security: NoSQL raises low security concerns; NewSQL raises moderate security concerns.
8. Use cases: NoSQL suits big data, social network applications, and IoT; NewSQL suits e-commerce, the telecom industry, and gaming.
9. Examples: NoSQL – DynamoDB, MongoDB, RavenDB, etc.; NewSQL – VoltDB, CockroachDB, NuoDB, etc.

Advantages of database management systems:
- Support for a logical view (schema, subschema) and a physical view (access methods, data clustering)
- A data definition language and a data manipulation language for manipulating data
- Important utilities such as transaction management and concurrency control, data integrity, crash recovery, and security
- Data consistency and better data security
Relational database systems, the dominant type of system for well-formatted business databases, also provide a greater degree of data independence. The motivations for using databases rather than files include greater availability to a diverse set of users, integration of data for easier access to and updating of complex transactions, and less redundancy of data.
CHAPTER 2 DATA TYPES, DATABASE KEYS, SQL FUNCTIONS AND OPERATORS

Data Types Overview
BINARY_FLOAT is a 32-bit floating-point number that requires 4 bytes. BINARY_DOUBLE is a 64-bit floating-point number that requires 8 bytes.

There are two classes of date- and time-related data types in PL/SQL:
1. Datetime data types
2. Interval data types
The datetime data types are:
- DATE
- TIMESTAMP
- TIMESTAMP WITH TIME ZONE
- TIMESTAMP WITH LOCAL TIME ZONE
The interval data types are:
- INTERVAL YEAR TO MONTH
- INTERVAL DAY TO SECOND

Maximum string length: if MAX_STRING_SIZE = EXTENDED, 32,767 bytes or characters; if MAX_STRING_SIZE = STANDARD, 4,000 bytes or characters.

NUMBER(p,s) is a number having precision p and scale s. The precision p can range from 1 to 38; the scale s can range from -84 to 127. Both precision and scale are in decimal digits. A NUMBER value requires from 1 to 22 bytes.

Character data types
The character data types represent alphanumeric text. PL/SQL uses the SQL character data types such as CHAR, VARCHAR2, LONG, RAW, LONG RAW, ROWID, and UROWID. CHAR(n) is a fixed-length character type whose length is from 1 to 32,767 bytes. VARCHAR2(n) is varying-length character data from 1 to 32,767 bytes.

Data type / maximum size in PL/SQL / maximum size in SQL:
- CHAR: 32,767 bytes / 2,000 bytes
- NCHAR: 32,767 bytes / 2,000 bytes
- RAW: 32,767 bytes / 2,000 bytes
- VARCHAR2: 32,767 bytes / 4,000 bytes (1 char = 1 byte)
- NVARCHAR2: 32,767 bytes / 4,000 bytes
- LONG: 32,760 bytes / 2 gigabytes (GB) - 1
- LONG RAW: 32,760 bytes / 2 GB
- BLOB: 8-128 terabytes (TB) / (4 GB - 1) * database_block_size
- CLOB: 8-128 TB (used to store large blocks of character data in the database) / (4 GB - 1) * database_block_size
- NCLOB: 8-128 TB (used to store large blocks of NCHAR data in the database) / (4 GB - 1) * database_block_size

Scalar types (no fixed range) hold single values with no internal components, such as NUMBER, DATE, or BOOLEAN: numeric values on which arithmetic operations are performed, like NUMBER(7,2); dates stored in the Julian date format; and logical values on which logical operations are performed.

NUMBER data type synonyms (no fixed range): DEC, DECIMAL, DOUBLE PRECISION, FLOAT, INTEGER, INT, NUMERIC, REAL, SMALLINT.

Type / size in memory / range of values:
- Byte: 1 byte / 0 to 255
- Boolean: 2 bytes / True or False
- Integer: 2 bytes / -32,768 to 32,767
- Long (long integer): 4 bytes / -2,147,483,648 to 2,147,483,647
- Single (single-precision real): 4 bytes / approximately -3.4E38 to 3.4E38
- Double (double-precision real): 8 bytes / approximately -1.8E308 to 4.9E324
- Currency (scaled integer): 8 bytes / approximately -922,337,203,685,477.5808 to 922,337,203,685,477.5807
- Date: 8 bytes / 1/1/100 to 12/31/9999
- Object: 4 bytes / any object reference
- String (variable length): 10 bytes + string length / up to about 2 billion characters (65,400 for Win 3.1)
- String (fixed length): string length / up to 65,400 characters
- Variant: 16 bytes for numbers; 22 bytes + string length for strings

The Concept of Signed and Unsigned Integers
Organization of bits in a 16-bit signed short integer: a signed number that stores 16 bits can contain values ranging from -32,768 through 32,767, and one that stores 8 bits can contain values ranging from -128 through 127.

Data types can be further divided into:
- Primitive
- Non-primitive
Primitive data types are pre-defined, whereas non-primitive data types are user-defined. Data types like byte, int, short, float, long, char, bool, etc., are called primitive data types. Non-primitive data types include class, enum, array, delegate, etc.

User-Defined Datatypes
There are two categories of user-defined datatypes:
- Object types
- Collection types
A user-defined data type (UDT) is a data type derived from an existing data type. You can use UDTs to extend the built-in types already available and create your own customized data types. There are six user-defined types:
1. Distinct type
2. Structured type
3. Reference type
4. Array type
5. Row type
6. Cursor type
Here the data types are grouped by category:
- Exact numeric: bit, tinyint, smallint, int, bigint, numeric, decimal, smallmoney, money
- Approximate numeric: float, real
- Date and time: datetime, smalldatetime, date, time, datetimeoffset, datetime2
- Character strings: char, varchar, text
- Unicode character strings: nchar, nvarchar, ntext
- Binary strings: binary, varbinary, image
- Other data types: sql_variant, timestamp, uniqueidentifier, XML
- CLR data types: hierarchyid
- Spatial data types: geometry, geography
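To tie several of the built-in types above together, here is a minimal Oracle-style sketch of a table definition; the table and column names are hypothetical, chosen only for illustration:

CREATE TABLE product_demo (
  product_id   NUMBER(10)       NOT NULL,   -- exact numeric, up to 10 digits
  price        NUMBER(7,2),                 -- precision 7, scale 2 (e.g., 99999.99)
  name         VARCHAR2(100),               -- variable-length string, max 100 bytes
  code         CHAR(8),                     -- fixed-length string, blank-padded
  created_at   DATE DEFAULT SYSDATE,        -- date and time to the second
  updated_at   TIMESTAMP WITH TIME ZONE,    -- fractional seconds plus time zone
  description  CLOB                         -- large character data
);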
Abstract Data Types in Oracle
One of the shortcomings of the Oracle 7 database was its limited number of intrinsic data types.

Abstract Data Types
An Abstract Data Type (ADT) consists of a data structure and subprograms that manipulate the data. The variables that form the data structure are called attributes. The subprograms that manipulate the attributes are called methods. ADTs are stored in the database, and instances of ADTs can be stored in tables and used as PL/SQL variables. ADTs let you reduce complexity by separating a large system into logical components, which you can reuse. ADTs are listed in the static data dictionary views (for example, USER_TYPES).

ANSI SQL data type conversions with Oracle data types
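As a concrete sketch of the ADT idea above, here is a hedged Oracle example; the type name, attributes, and method are hypothetical:

CREATE TYPE address_t AS OBJECT (
  street  VARCHAR2(80),
  city    VARCHAR2(40),
  country VARCHAR2(40),
  -- a method that formats the address as one line
  MEMBER FUNCTION one_line RETURN VARCHAR2
);
/
CREATE TYPE BODY address_t AS
  MEMBER FUNCTION one_line RETURN VARCHAR2 IS
  BEGIN
    RETURN street || ', ' || city || ', ' || country;
  END;
END;
/
-- Instances of the ADT can now be stored in a table column:
CREATE TABLE customer_demo (
  customer_id NUMBER(10),
  addr        address_t
);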
Database Keys
A key is a field of a table that identifies a tuple in that table.
- Super key: an attribute or a set of attributes that uniquely identifies a tuple within a relation.
- Candidate key: a super key such that no proper subset of it is a super key within the relation; it contains no unique subset (irreducibility). There may be many candidate keys (specified using UNIQUE), one of which is chosen as the primary key, e.g., PRIMARY KEY (sid), UNIQUE (id, grade). A candidate key can be unique, but its value can be changed.
- Natural key: a primary key in OLTP (and possibly a primary key in OLAP). A natural key (also known as a business key or domain key) is a type of unique key formed from attributes that exist and are used in the external world outside the database, such as an SSN column.
- Composite key (concatenated key): a primary key that consists of two or more attributes.
- Primary key: the candidate key selected to identify tuples uniquely within a relation. It should remain constant over the life of the tuple: the PK is unique, not repeated, not null, and does not change for life. If the primary key is to be changed, we drop the entity of the table and add a new entity. In most cases, the PK is used as a foreign key elsewhere; you cannot simply change its value, so you first delete the child rows so that you can modify the parent table.
- Minimal super key: not all super keys can be primary keys; the primary key is a minimal super key, that is, a minimized set of columns that can be used to identify a single row.
- Foreign key: an attribute or set of attributes within one relation that matches the candidate key of some (possibly the same) relation. Can you add a non-key as a foreign key? Yes; the minimum condition is that the referenced column must be unique, i.e., it should be a candidate key.
- Composite key: consists of more than one attribute; a combination of two or more columns that together uniquely identify rows in a table. The combination of columns guarantees uniqueness even though the individual columns do not; hence, they are combined to uniquely identify records in a table. You can use a composite key as a PK, and the composite key then goes to other tables as a foreign key.
- Alternate key: a relation can have only one primary key, but it may contain several fields or combinations of fields that could serve as the primary key. One field or combination of fields is used as the primary key; the fields or combinations not used as the primary key are known as candidate keys or alternate keys.
- Sort or control key: a field or combination of fields used to physically sequence the stored data. It is also known as the control key.
- Alternate key (secondary key): a secondary key is easy to understand with an example: a student record can contain NAME, ROLL NO., ID, and CLASS, and whichever of these is not chosen as the primary key is an alternate.
- Unique key: a set of one or more fields/columns of a table that uniquely identifies a record in a database table. It is much like a primary key, but it can accept one null value and it cannot have duplicate values. The unique key and primary key both guarantee uniqueness for a column or a set of columns. A unique key constraint is automatically defined within a primary key constraint. There may be many unique key constraints for one table, but only one PRIMARY KEY constraint per table.
- Artificial key: a key created from arbitrarily assigned data. Artificial keys are created when the primary key would be large and complex and has no relationship with many other relations. The data values of artificial keys are usually numbered in serial order. For example, a primary key composed of Emp_ID, Emp_role, and Proj_ID is large in an employee relation, so it is better to add a new virtual attribute to identify each tuple in the relation uniquely. ROWNUM and ROWID are artificial keys; an artificial key should be a number or integer (numeric). Format of a ROWID (figure).
- Surrogate key: an artificial key that aims to uniquely identify each record. This kind of key is unique because it is created when you do not have any natural primary key. You cannot insert values for a surrogate key; its value comes from the system automatically. There is no business logic in the key, so it does not change with business requirements. Surrogate keys reduce the complexity of composite keys, and they support the extract, transform, and load (ETL) process in databases.
- Compound key: has two or more attributes that together allow you to uniquely recognize a specific record, even though each column may not be unique by itself within the database.
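A minimal sketch showing several of these key types in Oracle-style DDL; the table and column names are hypothetical:

CREATE TABLE department_demo (
  dept_id   NUMBER(6)    PRIMARY KEY,   -- surrogate primary key
  dept_code VARCHAR2(10) UNIQUE         -- alternate (unique) key
);
CREATE TABLE employee_demo (
  emp_ssn  CHAR(9),                     -- natural key candidate
  emp_no   NUMBER(6),
  dept_id  NUMBER(6) REFERENCES department_demo (dept_id),  -- foreign key
  CONSTRAINT employee_pk PRIMARY KEY (emp_ssn),
  CONSTRAINT employee_uk UNIQUE (emp_no)
);
CREATE TABLE assignment_demo (
  emp_ssn CHAR(9)   REFERENCES employee_demo (emp_ssn),
  proj_id NUMBER(6),
  CONSTRAINT assignment_pk PRIMARY KEY (emp_ssn, proj_id)   -- composite key
);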
Database keys and their metadata descriptions (figure)

Operators
The operators <> and != both mean "not equal to", for example: salary <> 500.
Wildcards and Union Operators
The LIKE operator is used to filter the result set based on a string pattern. It is always used in the WHERE clause. Wildcards are used in SQL to match a string pattern: a wildcard character substitutes for one or more characters in a string, and wildcard characters are used with the LIKE operator. Two wildcards are often used in conjunction with LIKE:
1. The percent sign (%) represents zero, one, or multiple characters.
2. The underscore sign (_) represents a single character.
Two main differences between the LIKE and ILIKE operators:
1. LIKE is case-sensitive, whereas ILIKE is case-insensitive.
2. LIKE is a standard SQL operator, whereas ILIKE is only implemented in certain databases such as PostgreSQL and Snowflake.
To ignore case when you are matching values, you can use the ILIKE command:
Example 1: SELECT * FROM tutorial.billboard_top_100_year_en WHERE "group" ILIKE 'snoop%'
Example 2: SELECT * FROM Customers WHERE City LIKE 'ber%';
The SQL UNION clause is used to select distinct values from the tables; the SQL UNION ALL clause is used to select all values, including duplicates. The UNION operator combines the result sets of two or more SELECT statements. Every SELECT statement within a UNION must have the same number of columns.
The columns must also have similar data types, and the columns in every SELECT statement must be in the same order.

EXCEPT or MINUS
These operators return the records that exist in Dataset1 and not in Dataset2. Each SELECT statement within the EXCEPT query must have the same number of fields in the result sets, with similar data types. The difference is only in availability: EXCEPT is available in PostgreSQL and SQL Server, while MINUS is available in Oracle. There is otherwise no difference between the EXCEPT clause and the MINUS clause.

The IN operator allows you to specify multiple values in a WHERE clause; it is shorthand for multiple OR conditions.

The ANY operator returns a Boolean value: it returns true if any of the subquery values meet the condition. ANY means that the condition will be true if the operation is true for any of the values in the range.

NOT IN can take literal values, whereas NOT EXISTS needs a query to compare the results:
SELECT CAT_ID FROM CATEGORY_A WHERE CAT_ID NOT IN (SELECT CAT_ID FROM CATEGORY_B)
NOT EXISTS:
SELECT A.CAT_ID FROM CATEGORY_A A WHERE NOT EXISTS (SELECT B.CAT_ID FROM CATEGORY_B B WHERE B.CAT_ID = A.CAT_ID)
NOT EXISTS can be a good choice because it can join with the outer query and can lead to use of an index if the criteria involve an indexed column.
EXISTS and NOT EXISTS are typically used in conjunction with a correlated nested query. The result of EXISTS is a Boolean value: TRUE if the nested query result contains at least one tuple, or FALSE if it contains no tuples.

Supporting operators in different DBMS environments:
- TOP: SQL Server, MS Access
- LIMIT: MySQL, PostgreSQL, SQLite
- FETCH FIRST: Oracle
Note that Oracle does not support the TOP clause; use the ROWNUM clause instead (or, from Oracle 12c onward, the FETCH FIRST clause shown above).

SQL FUNCTIONS
Types of multiple-row functions in Oracle (aggregate functions):
- AVG: retrieves the average value over the rows in a table, ignoring null values
- COUNT: retrieves the number of rows (COUNT(*) counts all selected rows, including duplicates and rows with null values)
- MAX: retrieves the maximum value of the expression, ignoring null values
- MIN: retrieves the minimum value of the expression, ignoring null values
- SUM: retrieves the sum of values over the rows in a table, ignoring null values
Example:
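(A minimal sketch on a hypothetical employees table; the table name and columns are assumptions for illustration.)

SELECT dept_id,
       AVG(salary)   AS avg_salary,    -- ignores NULL salaries
       COUNT(*)      AS num_rows,      -- counts every row, NULLs included
       COUNT(salary) AS num_salaries,  -- counts only non-NULL salaries
       MAX(salary)   AS max_salary,
       MIN(salary)   AS min_salary,
       SUM(salary)   AS total_salary
FROM   employees
GROUP  BY dept_id;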
Explanation of Single Row Functions (figure)
Examples of date functions
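A few common Oracle date functions, as a minimal runnable sketch (results depend on the current date; the weekday name in NEXT_DAY assumes an English-language session):

SELECT SYSDATE                                    AS today,           -- current date and time
       ADD_MONTHS(SYSDATE, 3)                     AS plus_3_months,   -- shift by whole months
       MONTHS_BETWEEN(SYSDATE, DATE '2020-01-01') AS months_elapsed,
       NEXT_DAY(SYSDATE, 'MONDAY')                AS next_monday,     -- next given weekday
       LAST_DAY(SYSDATE)                          AS month_end,       -- last day of the month
       TRUNC(SYSDATE, 'MM')                       AS month_start      -- truncate to month
FROM   dual;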
CHARTOROWID converts a value from the CHAR, VARCHAR2, NCHAR, or NVARCHAR2 datatype to the ROWID datatype. This function does not support CLOB data directly; however, CLOBs can be passed in as arguments through implicit data conversion.
For assignments, Oracle can automatically convert the following:
- VARCHAR2 or CHAR to MLSLABEL
- MLSLABEL to VARCHAR2
- VARCHAR2 or CHAR to HEX
- HEX to VARCHAR2
Example of Conversion Functions
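A minimal sketch of the usual explicit conversion functions; the format masks and literal values are illustrative choices:

SELECT TO_CHAR(SYSDATE, 'DD-MON-YYYY')     AS date_as_text,   -- date to string
       TO_CHAR(1234.5, '9,999.99')         AS number_as_text, -- number to string
       TO_DATE('25-12-2023', 'DD-MM-YYYY') AS text_as_date,   -- string to date
       TO_NUMBER('1234.50')                AS text_as_number, -- string to number
       CAST('42' AS NUMBER)                AS cast_example    -- ANSI-style cast
FROM   dual;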
Subquery Concept
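A subquery is a query nested inside another SQL statement. A minimal sketch on a hypothetical employees table (names are assumptions for illustration):

-- Single-row subquery: employees earning more than the company average
SELECT emp_name, salary
FROM   employees
WHERE  salary > (SELECT AVG(salary) FROM employees);

-- Correlated subquery: employees earning the most in their own department
SELECT e.emp_name, e.dept_id, e.salary
FROM   employees e
WHERE  e.salary = (SELECT MAX(i.salary)
                   FROM   employees i
                   WHERE  i.dept_id = e.dept_id);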
END
CHAPTER 3 DATA MODELS AND MAPPING TECHNIQUES

Overview of data modeling in DBMS
The semantic data model is a method of structuring data to represent it in a specific logical way.

Types of data models in history (figure)

Data abstraction
Data abstraction is the process of hiding (suppressing) unnecessary details so that the high-level concept can be made more visible. A data model is a relatively simple representation, usually graphical, of more complex real-world data structures.

Data model schema and instance
A database instance is the data stored in the database at a particular moment; it is also called the database state (or occurrence, or snapshot). The content of the database, the instance, is also called an extension. The term instance is also applied to individual database components, e.g., a record instance, table instance, or entity instance.

Types of instances
- Initial database instance: the database instance that is initially loaded into the system.
- Valid database instance: an instance that satisfies the structure and constraints of the database.
The database instance changes every time the database is updated.

A database schema is the overall design or skeleton structure of the database. It represents the logical view of the entire database and can be represented by a visual diagram showing the database objects and their relationships with each other. A schema contains schema objects like tables, foreign keys, primary keys, views, columns, data types, stored procedures, etc. A database schema is designed by the database designers to help programmers whose software will interact with the database. The process of creating this design is called data modeling.

Relational schema definition
A relational schema refers to the metadata that describes the structure of data within a certain domain. It is the blueprint of a database that outlines the constraints that must be applied to ensure correct data (valid states).

Database schema definition
A relational schema may also be referred to as a database schema. A database schema is the collection of relation schemas for a whole database; a relational or database schema is a collection of metadata. A database schema describes the structure and constraints of data represented in a particular domain. A relational schema can be described as a blueprint of a database that outlines the way data is organized into tables; this blueprint contains no actual data. In a relational schema, each tuple is divided into fields called domains.
Other definitions: the overall design of the database; the structure of the database. The schema is also called the intension.

Types of schemas with respect to databases
- DBMS schemas: logical/conceptual/physical schema, external schema
- Data warehouse (multi-dimensional) schemas: snowflake and star
- OLAP schemas: fact constellation (galaxy) schema

ANSI-SPARC schema architecture
- External level: view level, user level, external schema, client level.
- Conceptual level: community view, ER model, conceptual schema, server level. This level uses conceptual (high-level, semantic) data models, entity-based or object-based, describing what data is stored and its relationships; it deals with logical data independence (external/conceptual mapping). The logical schema is sometimes called the conceptual schema too (server level); it covers implementation (representational) data models and DBMS-specific modeling.
- Internal level: physical representation, internal schema, database level, low level. It deals with how data is stored in the database and with physical data independence (conceptual/internal mapping). The physical data level covers physical storage and the physical schema, sometimes overlapping with the internal schema; it is detailed in administration manuals.

Data independence
Data independence is the ability to make changes in either the logical or physical structure of the database without requiring reprogramming of application programs.
Types of data independence
- Logical data independence: immunity of external schemas to changes in the conceptual schema.
- Physical data independence: immunity of the conceptual schema to changes in the internal schema.

There are two types of mapping in the database architecture.

Conceptual/internal mapping
The conceptual/internal mapping lies between the conceptual level and the internal level. Its role is to define the correspondence between the records and fields of the conceptual level and the files and data structures of the internal level.
External/conceptual mapping
The external/conceptual mapping lies between the external level and the conceptual level. Its role is to define the correspondence between a particular external view and the conceptual view.

Detailed description
When a schema at a lower level is changed, only the mappings between this schema and higher-level schemas need to be changed in a DBMS that fully supports data independence. The higher-level schemas themselves are unchanged; hence, the application programs need not be changed since they refer to the external schemas. For example, the internal schema may be changed when certain file structures are reorganized or new indexes are created to improve database performance.

Data abstraction
Data abstraction makes complex systems more user-friendly by removing the specifics of the system mechanics. The conceptual data model has been most successful as a tool for communication between the designer and the end user during the requirements analysis and logical design phases. Its success is because the model, using either ER or UML, is easy to understand and convenient to represent. Another reason for its effectiveness is that it is a top-down approach using the concept of abstraction. In addition, abstraction techniques such as generalization provide useful tools for integrating end-user views to define a global conceptual schema.
These differences show up in conceptual data models as different levels of abstraction; connectivity of relationships (one-to-many, many-to-many, and so on); or as the same concept being modeled as an entity, attribute, or relationship, depending on the user's perspective. Techniques used for view integration include abstraction, such as generalization and aggregation to create new supertypes or subtypes, or even the introduction of new relationships. The higher-level abstraction, the entity cluster, must maintain the same relationships between entities inside and outside the entity cluster as those that occur between the same entities in the lower-level diagram.
ERD and EER terminology is used not only in conceptual data modeling but also in the artificial intelligence literature when discussing knowledge representation (KR). The goal of KR techniques is to develop concepts for accurately modeling some domain of knowledge by creating an ontology. Ontology is a fundamental part of the Semantic Web. The goal of the World Wide Web Consortium (W3C) is to bring the web to its full potential as a semantic web, reusing previous systems and artifacts. Most legacy systems have been documented in structured analysis and structured design (SASD), especially in simple or extended ER diagrams (ERD); such systems need upgrading to become part of the Semantic Web. ERD-to-OWL-DL ontology transformation rules at the concrete level facilitate an easy and understandable transformation from ERD to OWL. Ontology engineering is an important aspect of the Semantic Web vision for attaining a meaningful representation of data. Although various techniques exist for the creation of ontologies, most of the methods involve
a number of complex phases, scenario-dependent ontology development, and poor validation of the ontology. A lightweight alternative is to build domain ontologies using the Entity-Relationship (ER) model.
We now discuss four abstraction concepts that are used in semantic data models, such as the EER model, as well as in KR schemes: (1) classification and instantiation, (2) identification, (3) specialization and generalization, and (4) aggregation and association. One ongoing project that is attempting to allow information exchange among computers on the Web is called the Semantic Web, which attempts to create knowledge representation models that are general enough to allow meaningful information exchange and search among machines. One commonly used definition of ontology is "a specification of a conceptualization." In this definition, a conceptualization is the set of concepts used to represent the part of reality or knowledge that is of interest to a community of users.

Types of abstractions
- Classification: A is a member of class B.
- Aggregation: B, C, and D are aggregated into A; A is made of/composed of B, C, and D (is-made-of, is-associated-with, is-part-of, is-component-of). Aggregation is an abstraction through which relationships are treated as higher-level entities.
- Generalization: B, C, and D can be generalized into A; B is-a/is-an A (is-a, is-as-like, is-kind-of).
- Category or union: a category represents a single superclass or subclass relationship with more than one superclass.
- Specialization: A can be specialized into B, C, and D; B, C, or D are special cases of A. The has-a approach (has-a, has-an) is used in specialization.
- Composition: is-made-of (like aggregation).
- Identification: is-identified-by.

UML diagram notations
UML stands for Unified Modeling Language; ERD stands for Entity-Relationship Diagram. UML is a popular and standardized modeling language that is primarily used for object-oriented software, while entity-relationship diagrams are used in structured analysis and conceptual modeling. Object-oriented data models are typically depicted using UML class diagrams. UML is a language based on OO concepts that describes a set of diagrams and symbols that can be used to graphically model a system. UML class diagrams are used to represent data and their relationships within the larger UML object-oriented system modeling language.
Associations
UML uses Boolean attributes instead of unary relationships but allows relationships of all other kinds. Optionally, each association may be given at most one name; association names normally start with a capital letter. Binary associations are depicted as lines between classes. Association lines may include elbows to assist with layout or when needed (e.g., for ring relationships).

ER diagram and class diagram synchronization sample
Synchronization between an ERD and a class diagram can be supported: you can transform the system design from the data model to the class model and vice versa without losing its persistent logic.

Conversions of terminology between UML and ERD (figure)
Relational data model and its main evolution (figure)
The ER model is the class-diagram counterpart in the UML series.
ER notation comparison with UML and their relationship (figure)
ER construct notation relationships (figure)
Rest of the ER construct notation comparison (figure)
Appropriate ER model design naming conventions
Guideline 1:
- Nouns become entities, objects, relations, or table names.
- Verbs indicate relationship types.
- Common nouns: a common noun (such as student or employee) in English corresponds to an entity type in an ER diagram.
- Proper nouns are entities, e.g., John, Singapore, New York City.
Note: A relational database uses relations, or two-dimensional tables, to store information.
Types of attributes
In an ER diagram, attributes associated with an entity set may be of the following types:
1. Simple attributes (atomic attributes, static attributes)
2. Key attributes
3. Unique attributes
4. Stored attributes
5. Prime attributes
6. Derived attributes (e.g., AGE derived from DOB; drawn as a dotted oval)
7. Composite attributes (e.g., Address (street, door#, city, town, country))
8. Multivalued attributes (drawn as a double ellipse, e.g., Phone#, Hobby, Degrees)
9. Dynamic attributes
10. Boolean attributes
The fundamental new idea in the MOST model is the so-called dynamic attribute. Each attribute of an object class is classified as either static or dynamic: a static attribute behaves as usual, while a dynamic attribute changes its value with time automatically.
Attributes of database tables that are candidate keys of those tables are called prime attributes.
Symbols of attributes (figure)

The Entity
The entity is the basic building block of the E-R data model. The term entity is used for three different concepts:
- Entity type
- Entity instance
- Entity set
Technical types of entities:
- Tangible entity: entities that exist physically in the real world, for example, a person or a car.
- Intangible entity: entities (concepts) that exist only logically and have no physical existence, for example, a bank account.

Major entity types:
1. Strong entity type
2. Weak entity type
3. Naming entity
4. Characteristic entity
5. Dependent entity
6. Independent entity

Details of entity types
An entity type whose instances can exist independently, that is, without being linked to the instances of any other entity type, is called a strong entity type. A weak entity can be identified uniquely only by considering the primary key of another (owner) entity. The owner entity set and the weak entity set must participate in a one-to-many relationship set (one owner, many weak entities), and the weak entity set must have total participation in this identifying relationship set. Weak entities have only a "partial key" (shown with a dashed underline); when the owner entity is deleted, all owned weak entities must also be deleted.

Naming entity types
Following are some recommendations for naming entity types:
- Singular nouns are recommended, but plurals can also be used.
- Use organization-specific names: customer, client, owner, anything will work.
- Writing names in capitals is generally followed, but other casing will also work.
- Abbreviations can be used; be consistent, and avoid confusing abbreviations. If they are confusing for others today, tomorrow they will confuse you too.

Database design tools
Some commercial products are aimed at providing environments to support the DBA in performing database design. These environments are provided by database design tools, or sometimes as part of a more general class of products known as computer-aided software engineering (CASE) tools. Such tools usually have several components chosen from the following kinds; it would be rare for a single product to offer all these capabilities:
1. ER design editor
2. ER-to-relational design transformer
3. FD-to-ER design transformer
4. Design analyzers

ER modeling rules for database design
Three components:
1. Structural part: a set of rules applied to the construction of the database
2. Manipulative part: defines the types of operations allowed on the data
3. Integrity rules: ensure the accuracy of the data
Step 1: DFD (data flow model)
Data flow diagrams: the most common tool used for designing database systems is the data flow diagram. It is used to design systems graphically and expresses different levels of system detail at different DFD levels.

Characteristics
- DFDs show the flow of data between the different processes of a specific system.
- DFDs are simple and hide complexities.
- DFDs are descriptive, and links between processes describe the information flow.
- DFDs focus on the flow of information only.
- Data flows are pipelines through which packets of information flow.
A DBMS application stores data as files, whereas an RDBMS application stores data in tabular form. In the file-system approach, there is no concept of data models; data mostly consists of different types of files like mp3, mp4, txt, doc, etc., grouped into directories on a hard drive. A data model, by contrast, is a collection of logical constructs used to represent the data structure and relationships within the database.
A data flow diagram shows the way information flows through a process or system. It includes data inputs and outputs, data stores, and the various subprocesses the data moves through.

Symbols used in DFDs:
- Data flow: an arrow symbol
- Data store: a rectangle open on the right side, with the left side drawn with double lines
- Process: a circle or a near-square rectangle
- Numbered DFD process: a circle or rectangle numbered by passing a line above its center

To create a DFD, follow these steps:
1. Create a list of activities
2. Construct the context-level DFD (external entities, processes)
3. Construct the level 0 DFD (manageable subprocesses)
4. Construct the level 1-n DFDs (actual data flows and data stores)

Types of DFDs
1. Context diagram
2. Level 0, 1, 2 diagrams
3. Detailed diagram
4. Logical DFD
5. Physical DFD
Context diagrams are the most basic data flow diagrams. They provide a broad view that is easily digestible but offers little detail. They always consist of a single process and describe a single system; the only process displayed in a context DFD is the process/system being analyzed. The name of a context DFD is generally a noun phrase.
Example context DFD diagram (figure)
At the context level, no data stores are created.

0-level DFD
The level 0 diagram is used to describe the working of the whole system. Once a context DFD has been created, the level zero (or "level naught") diagram is created. The level zero diagram contains all the apparent details of the system: it shows the interaction between a number of processes and may include a large number of external entities. At this level, the designer must keep a balance in describing the system, which means giving proper depth to the level 0 diagram processes.

1-level DFD
In a 1-level DFD, the context diagram is decomposed into multiple bubbles/processes. At this level, we highlight the main functions of the system and break down the high-level processes of the 0-level DFD into subprocesses.

2-level DFD
A 2-level DFD goes one step deeper into parts of the 1-level DFD. It can be used to plan or record the specific or necessary detail about the system's functioning.

Detailed DFDs are detailed enough that it doesn't usually make sense to break them down further.

Logical data flow diagrams focus on what happens in a particular information flow: what information is being transmitted, what entities are receiving that information, what general processes occur, and so on. They describe the functionality of the processes that were shown briefly in the level 0 diagram; detailed DFDs are generally expressed as successive refinements of those processes for which enough detail was not yet provided.

Logical DFD
A logical data flow diagram mainly focuses on the system process and illustrates how data flows in the system. Logical DFDs are used in various organizations for the smooth running of systems; in a banking software system, for example, a logical DFD describes how data moves from one entity to another.

Physical DFD
A physical data flow diagram shows how the data flow is actually implemented in the system. A physical DFD is more specific and closer to the implementation.
- Conceptual models include the entity-relationship database model (ERDBM), the object-oriented model (OODBM), and record-based data models.
- Implementation models: the types of record-based logical models are the hierarchical database model (HDBM), the network database model (NDBM), and the relational database model (RDBM).
- Semi-structured data model: this model allows data specifications where individual data items of the same type may have different sets of attributes. The Extensible Markup Language, also known as XML, is widely used for representing semi-structured data.
Evolution records of data models and their types (figure)
ERD modeling and database table relationships
What is an ERD? The structure, schema, or logical design of a database is depicted in an entity-relationship diagram.

Categories of relationships
- Optional relationship
- Mandatory relationship

Types of relationships with respect to degree
- Unary (self or recursive) relationship: involves a single entity; the relationship exists between occurrences of the same entity set.
- Binary: two entities are associated in a relationship.
- Ternary: a ternary relationship is one in which three entities participate; it is a relationship type that involves many-to-many relationships among three tables. For example, the university might need to record which teachers taught which subjects in which courses.
- N-ary: many entities are involved in the relationship; an n-ary relationship exists when there are n types of entities. One limitation of n-ary relationships over many entities is that they are very hard to convert into relational tables. A relationship between more than two entities is called an n-ary relationship.

Examples of a relationship R between two entities E and F (figure)

Relationship notations with entities: because it uses diamonds for relationships, Chen notation takes up more space than Crow's Foot notation. Chen notation also requires more symbols, while Crow's Foot has a slight learning curve. Chen notation supports the following cardinalities:
- One-to-one (1:1): each entity is associated with exactly one instance of the other entity
- One-to-many (1:N): one entity can be associated with multiple instances of another entity
- Many-to-one (N:1): many entities are associated with exactly one instance of another entity
- Many-to-many (M:N): multiple entities can be associated with multiple instances of another entity

ER design issues
Here we discuss the basic design issues of an ER database schema in the following points:
1. Use of entity sets vs. attributes: the choice depends on the structure of the real-world enterprise being modeled and the semantics associated with its attributes.
2. Use of entity sets vs. relationship sets: it can be difficult to decide whether an object is best expressed as an entity set or a relationship set.
3. Use of binary vs. n-ary relationship sets: generally, the relationships described in databases are binary; however, non-binary relationships can be represented by several binary relationships.

Transforming entities and attributes to relations
Our ultimate aim is to transform the ER design into a set of definitions for relational tables in a computerized database, which we do through a set of transformation rules.
The first step is to design a rough schema by analyzing the requirements; then normalize the ERD and remove functional dependencies from entities to reach the final steps.
Transformation Rule 1. Each entity in an ER diagram is mapped to a single table in the relational database.
Transformation Rule 2. A key attribute of the entity type is represented by the primary key; every single-valued attribute becomes a column of the table.
Transformation Rule 3. Given an entity E with a primary identifier, a multivalued attribute attached to E in the ER diagram is mapped to a table of its own.

Transforming binary relationships to relations
We are now prepared to give the transformation rule for a binary many-to-many relationship.
Transformation Rule 3.5 (N:N relationships). When two entities E and F take part in a many-to-many binary relationship R, the relationship is mapped to a representative table T in the related relational
database design. The table contains columns for all attributes in the primary keys of both tables transformed from entities E and F, and this set of columns forms the primary key for table T. Table T also contains columns for all attributes attached to the relationship. Relationship occurrences are represented by rows of the table, with the related entity instances uniquely identified by their primary key values.

Case 1: Binary relationship with 1:1 cardinality and total participation of one entity
Total participation means the minimum occurrence is 1, drawn with double lines. A person has 0 or 1 passport numbers, and a passport is always owned by exactly one person, so this is 1:1 cardinality with a full participation constraint from Passport. First convert each entity and the relationship to tables.

Case 2: Binary relationship with 1:1 cardinality and partial participation of both entities
A male marries 0 or 1 female and vice versa, so this is 1:1 cardinality with partial participation constraints on both sides. First convert each entity and the relationship to tables: the Male table corresponds to the Male entity with key M-Id; similarly, the Female table corresponds to the Female entity with key F-Id. The Marry table represents the relationship between Male and Female (which male marries which female), so it takes the attribute M-Id from Male and F-Id from Female.

Case 3: Binary relationship with N:1 cardinality
Case 4: Binary relationship with M:N cardinality
Case 5: Binary relationship with a weak entity
In this scenario, an employee can have many dependents, and each dependent depends on one employee. A dependent has no existence without an employee (e.g., a child is dependent on its father in his company), so Dependent is a weak entity and its participation will always be total.
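A minimal sketch of Rule 3.5 (Case 4) in SQL, using hypothetical Student and Course entities linked by an M:N Enrolls relationship:

CREATE TABLE student (
  student_id NUMBER(8) PRIMARY KEY,
  name       VARCHAR2(60)
);
CREATE TABLE course (
  course_id NUMBER(8) PRIMARY KEY,
  title     VARCHAR2(60)
);
-- The M:N relationship becomes its own table whose primary key combines
-- the primary keys of both participating entities:
CREATE TABLE enrolls (
  student_id NUMBER(8) REFERENCES student (student_id),
  course_id  NUMBER(8) REFERENCES course (course_id),
  grade      VARCHAR2(2),   -- an attribute attached to the relationship
  CONSTRAINT enrolls_pk PRIMARY KEY (student_id, course_id)
);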
EERD design approaches
Generalization is the concept that some entities are subtypes of other, more general entities. Subtypes are represented by an "is a" relationship; for example, Faculty IS-A subtype of Employee. One method of representing subtype relationships, shown below, is also known as the top-down approach.
- Exclusive subtypes: if subtypes are exclusive, one supertype relates to at most one subtype.
- Inclusive subtypes: if subtypes are inclusive, one supertype can relate to one or more subtypes.
Data abstraction in EERD levels
Concepts: total and partial participation, subclasses and superclasses, specializations and generalizations.
- View level: the highest level of data abstraction, like the EERD.
- Middle level: the middle level of data abstraction, like the ERD.
- Lowest level: physical/internal data stored at disk (bottom level).

Specialization
Specialization is subgrouping into subclasses (a top-down approach), using the HAS-A approach. Inheritance: subclasses inherit attributes and relationships from the superclass (Name, Birthdate, etc.).
Generalization
Generalization is the reverse process: defining superclasses from subclasses (a bottom-up approach). It brings together the common attributes of entities (IS-A relationships: ISA, IS-A, IS AN, IS-AN).

Union
A union models a class/subclass with more than one superclass of distinct entity types. Attribute inheritance here is selective.
Constraints on specialization and generalization
We have four types of specialization/generalization constraints:
1. Disjoint, total
2. Disjoint, partial
3. Overlapping, total
4. Overlapping, partial

Multiplicity (relationship constraint)
Covering constraints specify whether the entities in the subclasses collectively include all entities in the superclass.
Note: generalization is usually total because the superclass is derived from the subclasses.
The term cardinality has two different meanings based on the context in which you use it.
Types of relationship constraints
- Cardinality ratio: specifies the maximum number of relationship instances in which each entity can participate; the types are 1:1, 1:N, and M:N.
- Participation constraint: specifies whether the existence of an entity depends on its being related to another entity. The types are total and partial; they give the minimum number of relationship instances in which an entity can participate (1 for total participation, 0 for partial). Diagrammatically, a double line is drawn from the relationship type to the entity type for total participation.
There are two types of participation constraints:
1. Partial participation
2. Total participation
When we require all entities to participate in the relationship (total participation), we use double lines to specify it (every loan has to have at least one customer).
Cardinality expresses the number of entity occurrences associated with one occurrence of the related entity. The cardinality of a relationship is the number of instances of entity B that can be associated with entity A. There is a minimum cardinality and a maximum cardinality for each relationship, with an unspecified maximum cardinality shown as N. Cardinality limits are usually derived from the organization's policies or external constraints.
For example: at the university, each teacher can teach an unspecified maximum number of subjects as long as his/her weekly hours do not exceed 24 (an external constraint set by an industrial award). Teachers may teach 0 subjects if they are involved in non-teaching projects. Therefore, the cardinality limits for TEACHER are (0, N). The university's policies state that each subject is taught by only one teacher, but it is possible to have subjects that have not yet been assigned a teacher; therefore, the cardinality limits for SUBJECT are (0, 1). Teacher and subject
have an M:N relationship connectivity, and they are binary (or ternary, if we break this relationship further). Such situations are modeled using a composite entity (or gerund).

Cardinality constraint: the quantification of the relationship between two concepts or classes (a constraint on aggregation). Remember that cardinality is always a relationship to another thing.

Maximum cardinality (cardinality): always 1 or many. If class A has a relationship to package B with a cardinality of one, at most one occurrence of this class can appear in the package; the opposite is a package with a maximum cardinality of N, meaning there can be N such classes.
Minimum cardinality (optionality): simply means "required"; it is always 0 or 1, where 0 means 0 or more (optional) and 1 means 1 or more (required).

The three types of cardinality you can define for a relationship are as follows:
- Minimum cardinality: governs whether selecting items from this relationship is optional or required. If you set the minimum cardinality to 0, selecting items is optional; if you set it greater than 0, the user must select that number of items from the relationship. The combinations are optional-to-mandatory, optional-to-optional, mandatory-to-optional, and mandatory-to-mandatory.
- Maximum cardinality: sets the maximum number of items that the user can select from a relationship. If you set the minimum cardinality to greater than 0, you must set the maximum cardinality to a number at least as large; if you do not enter a maximum cardinality, the default is 999. The types of maximum cardinality are 1-to-1, 1-to-many, many-to-many, and many-to-1.
- Default cardinality: specifies what quantity of the default product is automatically added to the initial solution that the user sees. Default cardinality must be equal to or greater than the minimum cardinality and less than or equal to the maximum cardinality.

Summary of ER diagram symbols (figure)

A (min, max) notation replaces the cardinality-ratio numerals and single/double-line notation: associate a pair of integer numbers (min, max) with each participant of an entity type E in a relationship type R, where 0 ≤ min ≤ max and max ≥ 1; max = N means finite but unbounded.
Relationship types can also have attributes. Attributes of 1:1 or 1:N relationship types can be migrated to one of the participating entity types; for a 1:N relationship type, the relationship attribute can be migrated only to the entity type on the N-side of the relationship. Attributes of M:N relationship types must be specified as relationship attributes.

In the case of data modeling, cardinality defines the number of attributes in one entity set that can be associated with the number of attributes of another set via a relationship set. In simple words, it refers to the relationship one table can have with another table: one-to-one, one-to-many, many-to-one, or many-to-many. A third meaning is the number of tuples in a relation.
In the case of SQL, cardinality refers to a number: the count of unique values that appear in the table for a particular column. For example, in a Person table with a Gender column, the Gender column can have only the values 'Male' or 'Female'. Cardinality as a row count is the number of tuples in a relation (the number of rows).
The multiplicity of an association indicates how many objects of the opposing class can be associated with an object of this class; when this number is variable, a range is given.
Multiplicity combines cardinality and participation. The dictionary definition of cardinality is the number of elements in a particular set or other grouping.
Multiplicity can be set for attributes, operations, and associations in a UML class diagram (the equivalent of an ERD), and for associations in a use case diagram. A cardinality is how many elements are in a set; a multiplicity tells you the minimum and maximum allowed members of the set. They are not synonymous. Given the example below:

0..1 ---------- 1..*

Multiplicities:
- The first multiplicity, for the left entity: 0..1
- The second multiplicity, for the right entity: 1..*
Cardinalities for the first multiplicity: lower cardinality 0, upper cardinality 1.
Cardinalities for the second multiplicity: lower cardinality 1, upper cardinality many (*).
Multiplicity is the constraint on the collection of association objects, whereas cardinality is the count of the objects in the collection; the multiplicity is the cardinality constraint. Multiplicity of an association = participation of an element + cardinality of an element.
UML uses the term multiplicity, whereas data modeling uses the term cardinality. They are, for all intents and purposes, the same. Cardinality (sometimes referred to as ordinality) is what is used in ER modeling to describe a relationship between two entities.

Cardinality and modality
The main difference between cardinality and modality is that cardinality is the metric used to specify the number of occurrences of one object related to the number of occurrences of another object, whereas modality signifies whether a certain data object must participate in the relationship or not.
Cardinality refers to the maximum number of times an instance in one entity can be associated with instances in the related entity. Modality refers to the minimum number of times an instance in one entity can be associated with an instance in the related entity.
Cardinality can be 1 or Many, and its symbol is placed on the outside ends of the relationship line, closest to the entity; modality can be 1 or 0, and its symbol is placed on the inside, next to the cardinality symbol. For a cardinality of 1, a straight line is drawn; for a cardinality of Many, a crow's foot with three toes is drawn. For a modality of 1, a straight line is drawn; for a modality of 0, a circle is drawn.
The crow's-foot combinations read as: zero or more; one or more; one and only one (exactly 1).
Multiplicity = cardinality + participation. Cardinality denotes the maximum number of possible relationship occurrences in which a certain entity can participate (in simple terms: at most).
Note: connectivity, modality, multiplicity, and cardinality are used as closely related terms for the same idea.
Participation denotes whether all or only some entity occurrences participate in a relationship (in simple terms: at least).

Comparison of cardinality and modality:
- Basic: cardinality is the maximum number of associations between table rows; modality is the minimum number of row associations.
- Types: cardinality is one-to-one, one-to-many, or many-to-many; modality is nullable or not nullable.
Generalization is like a bottom-up approach in which two or more entities of lower levels combine to form a higher-level entity if they have some attributes in common. Generalization resembles a subclass-and-superclass system, but the only difference is the approach: generalization uses the bottom-up approach, combining subclasses to make a superclass. The IS-A approach (ISA, IS A, IS AN, IS-AN) is used in generalization. Generalization is the result of taking the union of two or more (lower-level) entity types to produce a higher-level entity type; generalization is the same as UNION, while specialization is the same as ISA.
Specialization is a top-down approach, the opposite of generalization. In specialization, one higher-level entity can be broken down into two or more lower-level entities; it is the result of taking a subset of a higher-level entity type to form a lower-level entity type. Normally, the superclass is defined first, the subclass and its related attributes are defined next, and the relationship set is then added. The HAS-A approach (HASA, HAS-A, HAS AN, HAS-AN) is used in specialization.
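A minimal SQL sketch of such a specialization hierarchy, with a Person supertype and Student and Employee subtypes; the entity names and attributes are hypothetical:

CREATE TABLE person (
  person_id NUMBER(8) PRIMARY KEY,
  name      VARCHAR2(60),
  birthdate DATE
);
-- Each subtype table reuses the supertype's primary key as both its
-- primary key and a foreign key back to the supertype:
CREATE TABLE student (
  person_id NUMBER(8) PRIMARY KEY REFERENCES person (person_id),
  major     VARCHAR2(40)
);
CREATE TABLE employee (
  person_id NUMBER(8) PRIMARY KEY REFERENCES person (person_id),
  salary    NUMBER(8,2)
);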
In UML-to-EER mapping, specialization or generalization comes in the form of a hierarchical entity set (figure).
Transforming an EERD to the relational database model (figure)
Specialization/generalization lattice example (UNIVERSITY): EERD to relational model (figure)
Mapping process
1. Create tables for all higher-level entities.
2. Create tables for lower-level entities.
3. Add the primary keys of the higher-level entities to the tables of the lower-level entities.
4. In the lower-level tables, add all other attributes of the lower-level entities.
5. Declare the primary key of the higher-level table and the primary key of the lower-level table.
6. Declare foreign key constraints.
This section has also presented the concept of entity clustering, which abstracts the ER schema to such a degree that the entire schema can appear on a single sheet of paper or a single computer screen.
END
CHAPTER 4 DISCOVERING BUSINESS RULES AND DATABASE CONSTRAINTS

Overview of Database Constraints
Definition of data integrity: constraints placed on the set of values allowed for the attributes of a relation are known as relational integrity. Constraints are special restrictions on allowable values; for example, the passing marks for a student must always be greater than 50%.

Categories of constraints
Constraints on databases can generally be divided into three main categories:
1. Constraints that are inherent in the data model. We call these inherent model-based constraints or implicit constraints.
2. Constraints that can be directly expressed in the schemas of the data model, typically by specifying them in the DDL (data definition language). We call these schema-based constraints or explicit constraints.
3. Constraints that cannot be directly expressed in the schemas of the data model and hence must be expressed and enforced by the application programs. We call these application-based or semantic constraints, or business rules.

Types of data integrity
1. Physical integrity: the process of ensuring the wholeness, correctness, and accuracy of data when data is stored and retrieved.
2. Logical integrity: the accuracy and consistency of the data itself; logical integrity ensures that the data makes sense in its context. The types of logical integrity are entity integrity and domain integrity.

The model-based or implicit constraints include domain constraints, key constraints, entity integrity constraints, and referential integrity constraints.
- Domain constraints can be violated if an attribute value is given that does not appear in the corresponding domain or is not of the appropriate data type.
- Key constraints can be violated if a key value in the new tuple already exists in another tuple in the relation r(R).
- Entity integrity can be violated if any part of the primary key of the new tuple t is NULL.
- Referential integrity can be violated if the value of any foreign key in t refers to a tuple that does not exist in the referenced relation.
Note: insertion constraints and constraints on NULLs are called explicit. An insert can violate any of the four types of implicit constraints discussed above.

1. Business rule or default relation constraints
These rules are applied to data before the data is inserted into the table columns, for example the UNIQUE, NOT NULL, and DEFAULT constraints:
1. The primary key value cannot be null.
2. NOT NULL (guards against the absence of a value, i.e., unknown or not applicable for a tuple)
3. UNIQUE
4. PRIMARY KEY
5. FOREIGN KEY
6. CHECK
7. DEFAULT
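A minimal sketch of these column constraints in one table definition; the table and column names are hypothetical:

CREATE TABLE student_demo (
  student_id NUMBER(8)    PRIMARY KEY,         -- cannot be null, must be unique
  email      VARCHAR2(80) UNIQUE,              -- no duplicate values allowed
  name       VARCHAR2(60) NOT NULL,            -- a value is required
  marks      NUMBER(5,2)  CHECK (marks > 50),  -- business rule as a CHECK
  enrolled   DATE         DEFAULT SYSDATE      -- value supplied when omitted
);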
Database Systems Handbook BY:MUHAMMAD SHARIF 82 2. Null Constraints Comparisons Involving NULL and Three-Valued Logic: SQL has various rules for dealing with NULL values. Recall from Section 3.1.2 that NULL is used to represent a missing value, but that it usually has one of three different interpretations: value unknown (exists but is not known), value not available (exists but is purposely withheld), or value not applicable (the attribute is undefined for this tuple). Consider the following examples to illustrate each of the meanings of NULL. 1. Unknown value. A person's date of birth is not known, so it is represented by NULL in the database. 2. Unavailable or withheld value. A person has a home phone but does not want it to be listed, so it is withheld and represented as NULL in the database. 3. Not applicable attribute. An attribute Last_College_Degree would be NULL for a person who has no college degrees because it does not apply to that person.
3. Enterprise Constraints Enterprise constraints – sometimes referred to as semantic constraints – are additional rules specified by users or database administrators and can be based on multiple tables. Here are some examples. A class can have a maximum of 30 students. A teacher can teach a maximum of four classes per semester. An employee cannot take part in more than five projects. The salary of an employee cannot exceed the salary of the employee's manager.
4. Key Constraints or Uniqueness Constraints These are called uniqueness constraints since they ensure that every tuple in the relation is unique. A relation can have multiple keys or candidate keys (minimal superkeys), out of which we choose one as the primary key. There is no restriction on which candidate key is chosen as the primary key, but it is suggested to go with the candidate key that has the smaller number of attributes. Null values are not allowed in the primary key, hence the Not Null constraint is also part of the key constraint.
Database Systems Handbook BY:MUHAMMAD SHARIF 83 5. Domain, Field, and Row Integrity Constraints Domain Integrity: A domain of possible values must be associated with every attribute (for example, integer types, character types, date/time types). Declaring an attribute to be of a particular domain acts as a constraint on the values it can take. Domain integrity rules govern these values: a specific field (cell) value must lie within the column's domain and represent a specific location within the table. In a database system, domain integrity is defined by: 1. The datatype and the length 2. The NULL value acceptance 3. The allowable values, through techniques like constraints or rules 4. The default value Some examples of domain-level integrity are mentioned below:  Data type – for example integer, characters, etc.  Date format – for example dd/mm/yy or mm/dd/yyyy or yy/mm/dd.  Null support – indicates whether the attribute can have null values.  Length – represents the length of characters in a value.  Range – the range specifies the lower and upper boundaries of the values the attribute may legally have. Entity integrity: No attribute of a primary key can be null (every tuple must be uniquely identified).
6. Referential Integrity Constraints A referential integrity constraint is commonly known as a foreign key constraint: foreign key values are derived from the primary key of another table. Similar options exist to deal with referential integrity violations caused by Update as the options discussed for the Delete operation. There are two types of referential integrity constraints:  Insert constraint: we can't insert a value into the CHILD table if that value is not stored in the MASTER table.  Delete constraint: we can't delete a value from the MASTER table if that value still exists in the CHILD table. The three rules that referential integrity enforces are: 1. A foreign key must have a corresponding primary key. ("No orphans" rule.) 2. When a record in a primary table is deleted, all related records referencing the primary key must also be deleted, which is typically accomplished by using cascade delete. 3. If the primary key for a record changes, all corresponding records in other tables using the primary key as a foreign key must also be modified. This can be accomplished by using a cascade update.
7. Assertion Constraints An assertion is any condition that the database must always satisfy. Domain constraints and integrity constraints are special forms of assertions.
Database Systems Handbook BY:MUHAMMAD SHARIF 84 8. Authorization Constraints We may want to differentiate among users as to the type of access they are permitted to various data values in the database. This differentiation is expressed in terms of authorization. The most common types are: Read authorization – allows reading but not modification of data; Insert authorization – allows insertion of new data but not modification of existing data; Update authorization – allows modification, but not deletion; Delete authorization – allows deletion of data.
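As a minimal sketch, this kind of authorization is expressed in SQL with GRANT and REVOKE; the employees table and the hr_clerk user are hypothetical:
-- Grant read and insert authorization, but not update or delete:
GRANT SELECT, INSERT ON employees TO hr_clerk;
-- Later, withdraw the insert authorization:
REVOKE INSERT ON employees FROM hr_clerk;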
Database Systems Handbook BY:MUHAMMAD SHARIF 85 9. Semantic Integrity and Transition Constraints The preceding integrity constraints are included in the data definition language because they occur in most database applications. However, they do not include a large class of general constraints, sometimes called semantic integrity constraints, which may have to be specified and enforced on a relational database. The types of constraints we have discussed so far may be called state constraints because they define the constraints that a valid state of the database must satisfy. Another type of constraint, called a transition constraint, can be defined to deal with state changes in the database. An example of a transition constraint is: "the salary of an employee can only increase."
What is the use of data constraints? Constraints are used to: avoid bad data being entered into tables; enforce business logic at the database level; give the optimizer extra information, which can improve performance; and enforce uniqueness and avoid redundant data in the database. END
Database Systems Handbook BY:MUHAMMAD SHARIF 86 CHAPTER 5 DATABASE DESIGN STEPS AND IMPLEMENTATIONS SQL versions:  1970 – Dr. Edgar F. "Ted" Codd described a relational model for databases.  1974 – Structured Query Language appeared.  1978 – IBM developed System R, a research prototype of a relational database system.  1986 – SQL1: ANSI standardized SQL, starting from IBM's implementation.  1989 – First revision; minor changes, no major additions to the standard.  1992 – SQL2 launched, a major revision.  1999 to 2003 – SQL3 launched, with features like triggers and object orientation.  SQL:2006 – support for the XML Query Language.  SQL:2011 – improved support for temporal databases.  The first standard was SQL-86 in 1986; the most recent major revision is SQL:2016.
SQL-86 The first SQL standard was SQL-86. It was published in 1986 as an ANSI standard and in 1987 as an International Organization for Standardization (ISO) standard. The starting point for the ISO standard was IBM's SQL implementation. This version of the SQL standard is also known as SQL 1.
SQL-89 The next SQL standard was SQL-89, published in 1989. This was a minor revision of the earlier standard, a superset of SQL-86 that replaced SQL-86. The size of the standard did not change.
SQL-92 The next revision of the standard was SQL-92 – and it was a major revision. The language introduced by SQL-92 is sometimes referred to as SQL 2. The standard document grew from 120 to 579 pages. However, much of the growth was due to more precise specifications of existing features. The most important new features were:
Database Systems Handbook BY:MUHAMMAD SHARIF 87 An explicit JOIN syntax and the introduction of outer joins: LEFT JOIN, RIGHT JOIN, FULL JOIN. The introduction of NATURAL JOIN and CROSS JOIN.
SQL:1999 SQL:1999 (also called SQL 3) was the fourth revision of the SQL standard. Starting with this version, the standard name used a colon instead of a hyphen to be consistent with the names of other ISO standards. This standard was published in multiple installments between 1999 and 2002. In 1993, the ANSI and ISO development committees decided to split future SQL development into a multi-part standard. The first new installments appeared in 1995, and SQL:1999 had many parts: Part 1: SQL/Framework (100 pages) defined the fundamental concepts of SQL. Part 2: SQL/Foundation (1050 pages) defined the fundamental syntax and operations of SQL: types, schemas, tables, views, query and update statements, expressions, and so forth. This part is the most important for regular SQL users. Part 3: SQL/CLI (Call Level Interface) (514 pages) defined an application programming interface for SQL. Part 4: SQL/PSM (Persistent Stored Modules) (193 pages) defined extensions that make SQL procedural. Part 5: SQL/Bindings (270 pages) defined methods for embedding SQL statements in application programs written in a standard programming language; the Dynamic SQL and Embedded SQL bindings were taken from SQL-92, and at the time C++ and Java interfaces were still under discussion. Part 6: SQL/XA, an SQL specialization of the popular XA interface developed by X/Open (see below). Part 7: SQL/Temporal, an SQL subproject to develop enhanced facilities for temporal data management using SQL. Part 8: SQL Multimedia (SQL/MM). A new ISO/IEC international standardization project for the development of an SQL class library for multimedia applications was approved in early 1993. This standardization activity, named SQL Multimedia (SQL/MM), specifies packages of SQL abstract data type (ADT) definitions using the facilities for ADT specification and invocation provided in the emerging SQL3 specification.
SQL:2003 and beyond In the 21st century, the SQL standard has been regularly updated. The SQL:2003 standard was published in late 2003. Its major addition was window functions, a powerful analytical feature that allows you to compute summary statistics without collapsing rows. Window functions significantly increased the expressive power of SQL. They are extremely useful in preparing all kinds of business reports, analyzing time series data, and analyzing trends. The addition of window functions to the standard coincided with the popularity of OLAP and data warehouses; people started using databases to make data-driven business decisions, a trend that is only gaining momentum thanks to the growing amount of data that all businesses collect. SQL:2003 also introduced XML-related functions, sequence generators, and identity columns.
SQL:2006 further specified how to use SQL with XML. It was not a revision of the complete SQL standard, just Part 14, which deals with SQL-XML interoperability. The most recent full revision of the standard is SQL:2016; in 2019 a new Part 15 was published, defining multidimensional array support in SQL.
Conformance with Standard SQL This section declares Oracle's conformance to the SQL standards established by these organizations: 1. American National Standards Institute (ANSI) in 1986.
Database Systems Handbook BY:MUHAMMAD SHARIF 88 2. International Organization for Standardization (ISO) in 1987. 3. United States Federal Government Federal Information Processing Standards (FIPS). Standards of SQL: ANSI, ISO, and FIPS.
Database Systems Handbook BY:MUHAMMAD SHARIF 89 Dynamic SQL or Extended SQL (Extended SQL is also called SQL3 or SQL-99) ODBC is a call level interface (CLI) that uses a different approach from embedded SQL. Using a CLI, SQL statements are passed to the database management system (DBMS) within a parameter of a runtime API. Because the text of the SQL statement is never known until runtime, the optimization step must be performed each time an SQL statement is run. This approach is commonly referred to as dynamic SQL. The simplest way to execute a dynamic SQL statement is with an EXECUTE IMMEDIATE statement, which passes the SQL statement to the DBMS for compilation and execution.
Static SQL or Embedded SQL Static (embedded) SQL consists of SQL statements in an application that do not change at runtime and, therefore, can be hard-coded into the application. This is the central idea of embedded SQL: placing SQL statements in a program written in a host programming language. Embedded SQL written this way is known as static SQL. Traditional SQL interfaces used an embedded SQL approach. SQL statements were placed directly in an application's source code, along with high-level language statements written in C, COBOL, RPG, and other programming languages. The source code then was precompiled, which translated the SQL statements into code that the subsequent compile step could process. This method is referred to as static SQL. One performance advantage of this approach is that SQL statements are optimized at the time the high-level program is compiled, rather than at runtime while the user is waiting. Static SQL statements in the same program are treated normally.
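Returning to dynamic SQL, here is a minimal PL/SQL sketch of EXECUTE IMMEDIATE; the employees table and the bind value are hypothetical:
DECLARE
  v_sql  VARCHAR2(200);
  v_rows PLS_INTEGER;
BEGIN
  -- The statement text is built at runtime, so it is optimized at runtime.
  v_sql := 'DELETE FROM employees WHERE department_id = :dept';
  EXECUTE IMMEDIATE v_sql USING 50;   -- bind value supplied at runtime
  v_rows := SQL%ROWCOUNT;
  DBMS_OUTPUT.PUT_LINE('Rows deleted: ' || v_rows);
  ROLLBACK;                           -- demo only: undo the change
END;
/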
Database Systems Handbook BY:MUHAMMAD SHARIF 91 Common Table Expressions (CTE) Common table expressions (CTEs) enable you to name subqueries temporarily for a result set. You then refer to these names like normal tables elsewhere in your query. This can make your SQL easier to write and understand later. CTEs go in the WITH clause above the SELECT statement.
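A minimal sketch, with a hypothetical employees table: the CTE names a grouped subquery, which the main query then treats as a table:
WITH dept_totals AS (
  SELECT department_id, SUM(salary) AS total_sal
  FROM   employees
  GROUP  BY department_id
)
SELECT department_id, total_sal
FROM   dept_totals
WHERE  total_sal > 100000;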
Database Systems Handbook BY:MUHAMMAD SHARIF 92 Recursive Common Table Expression (CTE) A recursive CTE (RCTE) is a CTE that references itself. By doing so, the CTE repeatedly executes and returns subsets of data until it returns the complete result set. A recursive CTE is useful for querying hierarchical data, such as an organization chart where one employee reports to a manager, or a multi-level bill of materials where a product consists of many components and each component itself also consists of many other components.
Query-By-Example (QBE) Query-By-Example (QBE) was the first interactive database query language to exploit a visual mode of human-computer interaction (HCI). In QBE, a query is constructed on an interactive terminal involving two-dimensional 'drawings' of one or more relations, visualized in tabular form, whose selected columns are filled in with 'examples' of the data items to be retrieved (thus the phrase query-by-example). It is different from SQL, and from most other database query languages, in having a graphical user interface that allows users to write queries by creating example tables on the screen. QBE, like SQL, was developed at IBM, and QBE is an IBM trademark, but a number of other companies sell QBE-like interfaces, including Paradox. A convenient shorthand notation is that if we want to print all fields in some relation, we can place P. under the name of the relation. This notation is like the SELECT * convention in SQL. It is equivalent to placing a P. in every field:
    Database Systems Handbook BY:MUHAMMAD SHARIF 93 Example of QBE: AND, OR Conditions in QBE
Database Systems Handbook BY:MUHAMMAD SHARIF 94 Key characteristics of SQL Set-oriented and declarative Free-form language Case insensitive Can be used both interactively from a command prompt and executed by a program Rules for writing commands:  Table names cannot exceed the DBMS's identifier length limit (20 characters in some systems).  The name of the table must be unique.  Field names also must be unique.  The field list and field lengths must be enclosed in parentheses.  The user must specify the field length and type.  The field definitions must be separated with commas.  SQL statements must end with a semicolon.
    Database Systems Handbook BY:MUHAMMAD SHARIF 96 Database Design Phases/Stages
Database Systems Handbook BY:MUHAMMAD SHARIF 101 III. Physical design. The physical design step involves the selection of indexes (access methods), partitioning, and clustering of data. The logical design methodology in step II simplifies the approach to designing large relational databases by reducing the number of data dependencies that need to be analyzed. This is accomplished by inserting the conceptual data modeling and integration steps (II(a) and II(b) in the figures) into the traditional relational design approach. IV. Database implementation, monitoring, and modification. Once the design is completed, the database can be created through the implementation of the formal schema using the data definition language (DDL) of a DBMS.
Database Systems Handbook BY:MUHAMMAD SHARIF 102 General Properties of Database Objects Entity: a distinct object; also Class, Table, Relation. Entity Set: a collection of similar entities, e.g., all employees. All entities in an entity set have the same set of attributes. Attribute: describes some aspect of the entity/object, a characteristic of the object. An attribute is a data item that describes a property of an entity or a relationship. Column or field: the column represents the set of values for a specific attribute. An attribute belongs to a model while a column belongs to a table: a column is a column in a database table, whereas attributes are externally visible facets of an object. A relation instance is a finite set of tuples in the RDBMS system. Relation instances never have duplicate tuples. Relationship: an association between entities; the connected entities are called participants. Connectivity describes the relationship type (1-1, 1-M, M-N). The degree of a relationship refers to the number of participating entities. For the relation in the image above: degree = 4 (columns), cardinality = 5 (tuples), and data values (cells) = 20.
Database Systems Handbook BY:MUHAMMAD SHARIF 103 Characteristics of a relation 1. Distinct relation/table name 2. Relations are unordered 3. Cells contain exactly one atomic (single) value; each cell (field) must contain a single value 4. No repeating groups 5. Distinct attribute names 6. The values of an attribute come from the same domain 7. The order of attributes has no significance 8. In the formal definition, the attributes in R(A1, ..., An) and the values in t = <V1, V2, ..., Vn> are ordered 9. Each tuple is distinct 10. The order of tuples has no significance 11. Tuples may be stored and retrieved in an arbitrary order 12. Tables manage attributes; they store information in the form of attributes only 13. Tables contain rows; each row is one record only 14. All rows in a table have the same columns; columns are also called fields 15. Each field has a data type and a name 16. A relation must contain at least one attribute (column) that identifies each tuple (row) uniquely
Database Table types Temporary tables Most RDBMSs support temporary tables. Temporary tables are a great feature that lets you store and process intermediate results by using the same selection, update, and join capabilities as ordinary tables. Temporary tables store session-specific data: only the session that adds the rows can see them. This can be handy for storing working data. In the ANSI standard, there are two types of temporary tables. There are likewise two types of temporary tables in the Oracle Database: global and private. Global Temporary Tables To create a global temporary table, add the clause "global temporary" between create and table. For example: create global temporary table toys_gtt ( toy_name varchar2(100)); The global temp table is accessible to everyone. You create it once and it is registered in the data dictionary, where it lives "forever"; "global" pertains to the schema definition. Private/Local Temporary Tables Starting in Oracle Database 18c, you can create private temporary tables. These tables are only visible in your session; other sessions can't see the table! A private temporary table is created "on the fly" and disappears after its use; you never see it in the data dictionary. Temporary tables can be very useful for keeping temporary data. Details of temp tables: A temporary table is owned by the person who created it and can only be accessed by that user. A global temporary table is accessible to everyone and contains data specific to the session using it; multiple sessions can use the same global temporary table simultaneously. It is a global definition for a temporary table that all can benefit from. A local temporary table is visible only to the connection that created it and is deleted when that connection is closed. Clone Tables Temporary tables are available in MySQL version 3.23 onwards. There may be a situation when you need an exact copy of a table and the CREATE TABLE ... or SELECT ... commands do not suit your purposes because the copy must include the same indexes, default values, and so forth.
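Returning to the private temporary tables described above, here is a minimal sketch for Oracle 18c and later; the table name is hypothetical, and it assumes Oracle's default naming rule that private temporary table names start with the ORA$PTT_ prefix:
CREATE PRIVATE TEMPORARY TABLE ora$ptt_toys (
  toy_name VARCHAR2(100)
) ON COMMIT DROP DEFINITION;  -- the table definition itself disappears at commit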
Database Systems Handbook BY:MUHAMMAD SHARIF 104 There are Magic Tables (virtual tables) in SQL Server that hold the temporal information of recently inserted and recently deleted data in virtual tables. For any INSERT, UPDATE, or DELETE operation, the INSERTED magic table stores the new (after) version of the row, and the DELETED table stores the old (before) version of the row. A record is a collection of data objects that are kept in fields, each having its own name and datatype. A record can be thought of as a variable that can store a table row or a set of columns from a table row; the record's fields correspond to the table's columns. External Tables An external table is a read-only table whose metadata is stored in the database but whose data is stored outside the database.
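A minimal sketch of an Oracle external table; the directory object data_dir and the file toys.csv are hypothetical and must already exist on the database server:
CREATE TABLE toys_ext (
  toy_name VARCHAR2(100),
  price    NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER            -- driver that reads flat files
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('toys.csv')         -- the data stays outside the database
);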
    Database Systems Handbook BY:MUHAMMAD SHARIF 107 Partitioning Tables and Table Splitting Partitioning logically splits up a table into smaller tables according to the partition column(s). So rows with the same partition key are stored in the same physical location.
    Database Systems Handbook BY:MUHAMMAD SHARIF 108 Data Partitioning horizontal (Table rows) Horizontal partitioning divides a table into multiple tables that contain the same number of columns, but fewer rows. Table partitioning vertically (Table columns) Vertical partitioning splits a table into two or more tables containing different columns.
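As a minimal sketch of horizontal (range) partitioning in Oracle, with a hypothetical orders table: rows land in a partition according to their order_date, so rows with the same partition key are stored together.
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE
)
PARTITION BY RANGE (order_date) (
  PARTITION p2023 VALUES LESS THAN (DATE '2024-01-01'),
  PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)   -- catch-all for later dates
);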
Database Systems Handbook BY:MUHAMMAD SHARIF 110 Collections vs. Records: In a collection, all items are of the same data type; in a record, the items may be of different data types. The same-type items of a collection are called elements; the differently-typed items of a record are called fields. A collection element is referenced as variable_name(index); a record field is referenced as variable_name.field_name. For creating a collection variable you can use %TYPE; for creating a record variable you can use %ROWTYPE or %TYPE. Lists and arrays are examples of collections; table rows with their columns are examples of records.
Correlated vs. Uncorrelated SQL Expressions A subquery is correlated when it joins to a table from the parent query; if it doesn't, it is uncorrelated. This leads to a difference between IN and EXISTS. EXISTS returns rows from the parent query as long as the subquery finds at least one row. So the following uncorrelated EXISTS returns all the rows in colors (provided bricks is not empty): select * from colors where exists ( select null from bricks);
Table Organizations Create a table in Oracle Database that has an organization clause. This defines how it physically stores rows in the table. The options for this are: 1. Heap table organization (some DBMSs provide for tables to be created without indexes and accessed randomly) 2. Index table organization, or index-sequential table 3. Hash table organization (some DBMSs provide an alternative to an index: access to data via a tree or a hashing key/hashing function) By default, tables are heap-organized. This means the database is free to store rows wherever there is space. You can add the "organization heap" clause if you want to be explicit.
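A minimal sketch of explicit organization clauses in Oracle; the table names are hypothetical (note that an index-organized table requires a primary key):
CREATE TABLE toys_heap (
  toy_id   NUMBER PRIMARY KEY,
  toy_name VARCHAR2(100)
) ORGANIZATION HEAP;    -- the default: rows stored wherever there is space

CREATE TABLE toys_iot (
  toy_id   NUMBER PRIMARY KEY,
  toy_name VARCHAR2(100)
) ORGANIZATION INDEX;   -- index-organized: rows stored in the primary key index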
Database Systems Handbook BY:MUHAMMAD SHARIF 111 Big picture of database languages and command types DMLs are of two types. Low-level or procedural DMLs require a user to specify both what data are needed and how to get those data; PL/SQL, Java, and relational algebra are examples. Because the programmer spells out the access steps, the programmer shares responsibility for query optimization. High-level or declarative DMLs (also referred to as non-procedural DMLs) require a user to specify what data are needed without specifying how to get those data, leaving optimization to the system; SQL and Google Search are examples. TRC and DRC are declarative languages.
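A minimal sketch of the contrast, assuming a hypothetical employees table: the declarative version states what is wanted; the procedural PL/SQL version spells out how to fetch it row by row.
-- Declarative (SQL): say WHAT you want.
SELECT COUNT(*) FROM employees WHERE salary > 5000;

-- Procedural (PL/SQL): say HOW to get it.
DECLARE
  v_count PLS_INTEGER := 0;
BEGIN
  FOR r IN (SELECT salary FROM employees) LOOP
    IF r.salary > 5000 THEN
      v_count := v_count + 1;
    END IF;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE(v_count);
END;
/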
Database Systems Handbook BY:MUHAMMAD SHARIF 115 Other SQL clauses used during query evaluation  Windowing clause: when you use ORDER BY in an analytic function, the database adds a default windowing clause of range between unbounded preceding and current row.  Sliding windows: as well as running totals, you can change the windowing clause to cover a subset of the previous rows. The following shows the total weight of (a code sketch of both cases appears after the list of programming approaches below): 1. The current row + the previous row 2. All rows with the same weight as the current + all rows with a weight one less than the current
Strategies for schema design in DBMS: Top-down strategy Bottom-up strategy Inside-out strategy Mixed strategy Identifying correspondences and conflicts during schema integration in a DBMS: Naming conflicts Type conflicts Domain conflicts Conflicts among constraints
Process of SQL When we execute an SQL command on any relational database management system, the system automatically finds the best routine to carry out our request, and the SQL engine determines how to interpret that particular command. Structured Query Language contains the following four components in its process: 1. Query Dispatcher 2. Optimization Engines 3. Classic Query Engine 4. SQL Query Engine, etc.
SQL Programming: Approaches to Database Programming In this section, we briefly compare the three approaches to database programming and discuss the advantages and disadvantages of each. Several techniques exist for including database interactions in application programs. The main approaches for database programming are the following: 1. Embedding database commands in a general-purpose programming language (the embedded SQL approach). The main advantage of this approach is that the query text is part of the program source code itself, and hence can be checked for syntax errors and validated against the database schema at compile time.
    Database Systems Handbook BY:MUHAMMAD SHARIF 116 2. Using a library of database functions. A library of functions is made available to the host programming language for database calls. Library of Function Calls Approach. This approach provides more flexibility in that queries can be generated at runtime if needed. 3. Designing a brand-new language. A database programming language is designed from scratch to be compatible with the database model and query language. Database Programming Language Approach. This approach does not suffer from the impedance mismatch problem, as the programming language data types are the same as the database data types.
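Returning to the windowing clause described above, here is a minimal sketch against a hypothetical toys(toy_name, weight) table: the first window is a plain running total, and the next two correspond to items 1 and 2 of the sliding-window list.
SELECT toy_name, weight,
       SUM(weight) OVER (ORDER BY weight) AS running_total,
       SUM(weight) OVER (ORDER BY weight
            ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS current_plus_previous,
       SUM(weight) OVER (ORDER BY weight
            RANGE BETWEEN 1 PRECEDING AND CURRENT ROW) AS same_or_one_less_weight
FROM   toys;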
    Database Systems Handbook BY:MUHAMMAD SHARIF 117 Standard SQL order of execution
Database Systems Handbook BY:MUHAMMAD SHARIF 120 Types of Subquery Subquery types: 1. FROM subqueries 2. Attribute-list subqueries 3. Inline subqueries 4. Correlated subqueries 5. WHERE subqueries 6. IN subqueries 7. HAVING subqueries 8. Multirow subquery operators: ANY and ALL Scalar Subqueries Scalar subqueries return one column and at most one row. You can replace a column with a scalar subquery in most cases.
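A minimal sketch of a scalar subquery standing in for a column; the employees table is hypothetical:
SELECT e.employee_id,
       e.salary,
       (SELECT AVG(salary) FROM employees) AS company_avg  -- one column, one row
FROM   employees e;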
    Database Systems Handbook BY:MUHAMMAD SHARIF 121 We can once again be faced with possible ambiguity among attribute names if attributes of the same name exist— one in a relation in the FROM clause of the outer query, and another in a relation in the FROM clause of the nested query. The rule is that a reference to an unqualified attribute refers to the relation declared in the innermost nested query.
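A minimal sketch of this ambiguity, with a hypothetical employees table used in both the outer and the nested query; qualifying every attribute with an alias avoids relying on the innermost-query default:
SELECT e.last_name
FROM   employees e
WHERE  e.salary > (SELECT AVG(m.salary)
                   FROM   employees m
                   WHERE  m.department_id = e.department_id); -- correlated reference made explicit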
Database Systems Handbook BY:MUHAMMAD SHARIF 124 Some important differences between DML statements: Difference between DELETE and TRUNCATE statements There is a slight difference between the DELETE and TRUNCATE statements. The DELETE statement deletes rows from the table based on the condition defined by the WHERE clause, or deletes all the rows when no condition is specified; however, it does not free the space occupied by the table. The TRUNCATE statement deletes all the rows from the table and frees the containing space. Difference between DROP and TRUNCATE statements When you use the DROP statement, it deletes the table's rows together with the table's definition, so all the relationships of that table with other tables will no longer be valid. When you drop a table: The table structure will be dropped Relationships will be dropped Integrity constraints will be dropped Access privileges will also be dropped
Database Systems Handbook BY:MUHAMMAD SHARIF 125 On the other hand, when we TRUNCATE a table, the table structure remains the same, so you will not face any of the above problems. In general, ANSI SQL permits the use of ON DELETE and ON UPDATE clauses to specify CASCADE, SET NULL, or SET DEFAULT. MS Access, SQL Server, and Oracle support ON DELETE CASCADE. MS Access and SQL Server support ON UPDATE CASCADE. Oracle does not support ON UPDATE CASCADE. Oracle supports SET NULL. MS Access and SQL Server do not support SET NULL. Refer to your product manuals for additional information on referential constraints. MS Access does not support ON DELETE CASCADE or ON UPDATE CASCADE at the SQL command-line level. Types of Multitable INSERT statements
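As a minimal sketch of a conditional multitable INSERT in Oracle; the orders, small_orders, and big_orders tables are hypothetical:
INSERT ALL
  WHEN amount <= 100 THEN
    INTO small_orders (order_id, amount) VALUES (order_id, amount)
  WHEN amount > 100 THEN
    INTO big_orders   (order_id, amount) VALUES (order_id, amount)
SELECT order_id, amount FROM orders;  -- each source row is routed by the WHEN conditions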
Database Systems Handbook BY:MUHAMMAD SHARIF 126 DML before and after processing in triggers Database views and their types: The definition of views is one of the final stages in database design, since it relies on the logical schema being finalized. Views are "virtual tables" that are a selection of rows and columns from one or more real tables and can include calculated values in additional virtual columns. A view is a virtual relation: one that does not physically exist but is dynamically derived. It can be constructed by performing operations (i.e., select, project, join, etc.) on the values of existing base relations (a base relation is a named relation designed in the conceptual schema, whose tuples are physically stored in the database). Views are visible in the external schema.
    Database Systems Handbook BY:MUHAMMAD SHARIF 127 Types of View 1. User-defined view a. Simple view (Single table view) b. Complex View (Multiple tables having joins, group by, and functions) c. Inline View (Based on a subquery in from clause to create a temp table and form a complex query) d. Materialized View (It stores physical data, definitions of tables) e. Dynamic view f. Static view 2. Database View 3. System Defined Views 4. Information Schema View 5. Catalog View 6. Dynamic Management View 7. Server-scoped Dynamic Management View 8. Sources of Data Dictionary Information View a. General Views b. Transaction Service Views c. SQL Service Views
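A minimal sketch of a simple (single-table) view and a complex (join and aggregate) view; the employees and departments tables are hypothetical:
CREATE VIEW emp_public AS
  SELECT employee_id, last_name, department_id
  FROM   employees;              -- simple view: based on one table, updatable

CREATE VIEW dept_salaries AS
  SELECT d.department_name, SUM(e.salary) AS total_sal
  FROM   departments d
  JOIN   employees   e ON e.department_id = d.department_id
  GROUP  BY d.department_name;   -- complex view: join + GROUP BY, read-only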
Database Systems Handbook BY:MUHAMMAD SHARIF 128 Advantages of views: Provide security Hide specific parts of the database from certain users Customize base relations to users' needs Support the external model Provide logical independence Views don't store data in a physical location. Views can provide access restriction, since data insertion, update, and deletion are not possible through a read-only view. We can perform DML on a view if it is derived from a single base relation and contains the primary key or a candidate key. When can a view be updated? 1. The view is defined based on one and only one table. 2. The view must include the PRIMARY KEY of the table upon which the view has been created. 3. The view should not have any field made out of aggregate functions. 4. The view must not have any DISTINCT clause in its definition. 5. The view must not have any GROUP BY or HAVING clause in its definition. 6. The view must not have any SUBQUERIES in its definition. 7. If the view you want to update is based upon another view, the latter should be updatable. 8. Any of the selected output fields (of the view) must not use constants, strings, or value expressions. END
Database Systems Handbook BY:MUHAMMAD SHARIF 129 CHAPTER 6 DATABASE NORMALIZATION AND DATABASE JOINS Quick Overview of Codd's 12 Rules Not every database that has tables and constraints can be called a relational database system, and a database that merely claims a relational data model is not automatically a Relational Database System (RDBMS). So there are rules that define a correct RDBMS. These rules were developed by Dr. Edgar F. Codd (E.F. Codd) in 1985, who had done extensive research on the relational model of database systems. Codd presented thirteen rules (numbered 0 to 12) for testing a DBMS against his relational model; if a database follows the rules, it is called a true relational database (RDBMS). These rules are popularly known as Codd's 12 rules.
Rule 0: The Foundation Rule The database must be in relational form, so that the system can handle the database through its relational capabilities.
Rule 1: Information Rule A database contains various kinds of information, and this information must be stored in the cells of tables, in the form of rows and columns.
Rule 2: Guaranteed Access Rule Every single atomic value must be logically accessible from the relational database using the combination of primary key value, table name, and column name. Each attribute of a relation has a name.
Rule 3: Systematic Treatment of Null Values Nulls must be represented and treated in a systematic way, independent of data type. The null value has various meanings in the database, such as missing data, no value in a cell, inappropriate information, or unknown data; the primary key must not be null.
Rule 4: Active/Dynamic Online Catalog Based on the Relational Model The entire logical structure of the database must be described in an online catalog, known as the data dictionary, which authorizes users to access the database and supports the same query language used to access the database itself. Metadata must be stored and managed as ordinary data.
Rule 5: Comprehensive Data Sublanguage Rule The relational database may support various languages, but at least one language must have a well-defined, linear syntax, support character strings, and comprehensively support data definition, view definition, data manipulation, integrity constraints, and transaction management operations. If the database allows access to the data without any such language, it is considered a violation of this rule.
Rule 6: View Updating Rule All views that are theoretically updatable must also be practically updatable by the database system.
Rule 7: Relational-Level Operation (High-Level Insert, Update, and Delete) Rule A database system should support high-level relational operations such as insert, update, and delete on whole sets of rows as well as on a single row. It should also support the union, intersection, and minus operations.
Rule 8: Physical Data Independence Rule All stored data must be physically independent of the applications that access it. If data is updated or the physical structure of the database is changed, external applications accessing the data from the database should not be affected.
Rule 9: Logical Data Independence Rule It is similar to physical data independence: if any changes occur at the logical level (table structures), they should not affect the user's view (application).
For example, suppose a table is split into two tables, or two tables are joined to create a single table; these changes should not have any impact on the user's view or application.
Database Systems Handbook BY:MUHAMMAD SHARIF 130 Rule 10: Integrity Independence Rule A database must maintain integrity independence when inserting data into a table's cells using the SQL query language. Entered values should not rely on any external factor or application to maintain their integrity. This also helps make the database independent of each front-end application.
Rule 11: Distribution Independence Rule The distribution independence rule says a database must work properly even if it is stored in different locations and used by different end users. Suppose a user accesses the database through an application; the user should not need to be aware that another user uses particular data, or that the data is located on one site or several. End users should be able to access the database, and that access should be independent of distribution, for every user performing SQL queries.
Rule 12: Non-Subversion Rule The non-subversion rule says that if an RDBMS offers a low-level language, separate from SQL, to access the database, that language must not be able to subvert or bypass the integrity rules when transforming data.
Normalization Normalization is a refinement technique: it reduces redundancy and eliminates undesirable characteristics like insertion, update, and deletion anomalies and repetition of data. Normalization and E-R modeling are used concurrently to produce a good database design. Advantages of normalization: Reduces data redundancies Expands entities Helps eliminate data anomalies Produces controlled redundancies to link tables The price is more processing effort; normalization proceeds through a series of steps called normal forms.
Database Systems Handbook BY:MUHAMMAD SHARIF 131 Anomalies of a bad database design The table displays data redundancies which yield the following anomalies: 1. Update anomalies: changing the price of product ID 4 requires an update in several records. If data items are scattered and are not linked to each other properly, this can lead to strange situations. 2. Insertion anomalies: a new employee must be assigned a project (a phantom project); we are forced to insert data into a record that does not exist at all. 3. Deletion anomalies: if an employee is deleted, other vital data is lost. We try to delete a record, but part of the information it carried is stored nowhere else and is lost with it. For example, if we delete the Dining Table from Order 1006, we lose the information concerning this item's finish and price. Anomaly types w.r.t. database table constraints
Database Systems Handbook BY:MUHAMMAD SHARIF 132 In most cases, if you can place your relations in the third normal form (3NF), then you will have avoided most of the problems common to bad relational designs. Boyce-Codd normal form (BCNF) and the fourth normal form (4NF) handle special situations that arise only occasionally.
 1st Normal Form: Before normalization, a table typically contains repeating groups. In the conversion to first normal form we eliminate repeating groups in table records and develop a proper primary key: all attributes depend on the primary key, which uniquely identifies each row. Dependencies can then be identified; there are no multivalued attributes, and every attribute value is atomic. A functional dependency exists when the value of one thing is fully determined by another. For example, given the relation EMP(empNo, empName, sal), attribute empName is functionally dependent on attribute empNo: if we know empNo, we also know the empName. Types of dependencies: Partial (based on part of a composite primary key) Transitive (one non-prime attribute depends on another non-prime attribute)
Database Systems Handbook BY:MUHAMMAD SHARIF 133 PROJ_NUM, EMP_NUM → PROJ_NAME, EMP_NAME, JOB_CLASS, CHG_HOUR, HOURS
 2nd Normal Form: Start with the 1NF format and write each key component on a separate line. Partial dependency is removed by splitting the table: each key, together with its dependent attributes, becomes a new table. The result may still exhibit transitive dependency. A relation is in 2NF if it is in 1NF and all non-key attributes are fully functionally dependent on the primary key; no partial dependency should exist in the relation.
 3rd Normal Form: Create separate table(s) to eliminate transitive functional dependencies. 3NF is 2NF plus no transitive dependencies (functional dependencies among non-primary-key attributes). In 3NF no transitive functional dependency exists for non-prime attributes in a relation, i.e., no non-key attribute depends on another non-key attribute.
 Boyce-Codd Normal Form (BCNF): A 3NF table with one candidate key is already in BCNF. It contains only full functional dependencies, and every determinant in the table is a candidate key. BCNF is the advanced version of 3NF; it is stricter than 3NF. A table is in BCNF if, for every functional dependency X → Y, X is a superkey of the table.
 4th Normal Form (4NF): A relation is in 4NF if it is in Boyce-Codd normal form and has no multivalued dependency. For a dependency A → B, if for a single value of A multiple values of B exist, then the relationship is a multivalued dependency.
 5th Normal Form (5NF): A relation is in 5NF if it is in 4NF and does not contain any join dependency, and joining should be lossless. 5NF is satisfied when all the tables are broken into as many tables as possible to avoid redundancy. 5NF is also known as project-join normal form (PJ/NF).
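As a concrete illustration of normalization by decomposition (here, removing a transitive dependency to reach 3NF); the tables and columns are hypothetical:
-- Before: employee(emp_num, emp_name, dept_num, dept_name)
-- dept_name depends on dept_num, not on the key emp_num (a transitive dependency).
CREATE TABLE department (
  dept_num  NUMBER PRIMARY KEY,
  dept_name VARCHAR2(50)
);
CREATE TABLE employee (
  emp_num  NUMBER PRIMARY KEY,
  emp_name VARCHAR2(50),
  dept_num NUMBER REFERENCES department (dept_num)  -- rejoin recovers the original table
);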
Database Systems Handbook BY:MUHAMMAD SHARIF 134 Denormalization in Databases Denormalization is a database optimization technique in which we add redundant data to one or more tables. This can help us avoid costly joins in a relational database. Note that denormalization does not mean not doing normalization; it is an optimization technique that is applied after normalization. Types of Denormalization The two most common types of denormalization are two entities in a one-to-one relationship and two entities in a one-to-many relationship. Pros of denormalization:
Database Systems Handbook BY:MUHAMMAD SHARIF 135 Retrieving data is faster since we do fewer joins. Queries to retrieve can be simpler (and therefore less likely to have bugs), since we need to look at fewer tables. Cons of denormalization: Updates and inserts are more expensive. Denormalization can make update and insert code harder to write. Data may be inconsistent: which is the "correct" value for a piece of data? Data redundancy necessitates more storage.
Relational Decomposition Decomposition is used to eliminate some of the problems of bad design, like anomalies, inconsistencies, and redundancy. When a relation in the relational model is not in an appropriate normal form, decomposition of the relation is required. In a database, it breaks a table into multiple tables. Types of Decomposition 1. Lossless decomposition: if no information is lost from the relation that is decomposed, the decomposition is lossless. The process of normalization depends on being able to factor or decompose a table into two or more smaller tables, in such a way that we can recapture the precise content of the original table by joining the decomposed parts. 2. Lossy decomposition: data is lost when the table is decomposed further.
Database SQL Joins A join is a combination of a Cartesian product followed by a selection process. Database join types:  Non-ANSI format joins 1. Non-equi join 2. Self-join 3. Equi join  ANSI format joins 1. Semi join 2. Left/right semi join 3. Anti semi join 4. Bloom join 5. Natural join (inner join, self join, theta join, cross join/Cartesian product, conditional join) 6. Inner join (equi and theta join/self-join) 7. Theta (θ) join 8. Cross join 9. Cross product
    Database Systems Handbook BY:MUHAMMAD SHARIF 136 10. Multi-join operation 11. Outer o Left outer join o Right outer join o Full outer join  Several different algorithms can be used to implement joins (natural, condition-join) 1. Nested Loops join o Simple nested loop join o Block nested loop join o Index nested loop join 2. Sort merge join/external sort join 3. Hash join
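A minimal sketch of the most common join types from the lists above; the employees and departments tables are hypothetical:
SELECT e.last_name, d.department_name
FROM   employees e
JOIN   departments d ON e.department_id = d.department_id;  -- inner (equi) join

SELECT e.last_name, d.department_name
FROM   employees e
LEFT OUTER JOIN departments d
  ON e.department_id = d.department_id;  -- keeps employees with no department

SELECT e.last_name, d.department_name
FROM   employees e
CROSS JOIN departments d;                -- Cartesian product of the two tables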
    Database Systems Handbook BY:MUHAMMAD SHARIF 138 END
Database Systems Handbook BY:MUHAMMAD SHARIF 139 CHAPTER 7 FUNCTIONAL DEPENDENCIES IN THE DATABASE MANAGEMENT SYSTEM Functional Dependency Functional dependency (FD) is a set of constraints between two attributes in a relation. A functional dependency says that if two tuples have the same values for attributes A1, A2, ..., An, then those two tuples must have the same values for attributes B1, B2, ..., Bn. Functional dependency is represented by an arrow sign (→), that is, X → Y, where X functionally determines Y. The left-hand-side attributes determine the values of the attributes on the right-hand side.
Types of schema dependency SQL Server records two types of dependency: schema-bound and non-schema-bound dependencies. A schema-bound dependency prevents the referenced object from being altered or dropped without first removing the dependency. An example of a schema-bound reference is a view created on a table using the WITH SCHEMABINDING option. A non-schema-bound dependency does not prevent the referenced object from being altered or dropped. An example of this is a stored procedure that selects from a table: the table can be dropped without first dropping the stored procedure or removing the reference to the table from that stored procedure.
Consider the following. Functional dependency (FD) is a constraint that determines the relation of one attribute to another attribute. Functional dependency is denoted by an arrow "→"; the functional dependency of Y on X is represented by X → Y. In this example, if we know the value of the employee number, we can obtain the employee name, city, salary, etc. By this, we can say that the city, employee name, and salary are functionally dependent on the employee number.
Key terms for functional dependency in a database: Axiom: axioms are a set of inference rules used to infer all the functional dependencies on a relational database. Decomposition: a rule that suggests that if a table appears to contain two entities determined by the same primary key, you should consider breaking it up into two different tables. Dependent: displayed on the right side of the functional dependency diagram. Determinant: displayed on the left side of the functional dependency diagram. Union: suggests that if two tables are separate and the PK is the same, you should consider putting them together.
Armstrong's Axioms The inclusion rule is one rule of implication by which FDs can be generated that are guaranteed to hold for all possible tables. It turns out that from a small set of basic rules of implication, we can derive all others. We list here three basic rules that we call Armstrong's Axioms. Armstrong's axioms were developed by William Armstrong in 1974 to reason about functional dependencies. They are rules that always hold: 1. Transitivity: if A → B and B → C, then A → C (a transitive relation). 2. Reflexivity: A → B if B is a subset of A.
Database Systems Handbook BY:MUHAMMAD SHARIF 140 3. Augmentation: if A → B, then AC → BC. Inference Rules (IR) Armstrong's axioms are the basic inference rules. They are used to conclude functional dependencies on a relational database. An inference rule is a type of assertion that can be applied to a set of functional dependencies to derive other functional dependencies. Functional dependency has six types of inference rules: 1. Reflexive rule (IR1) 2. Augmentation rule (IR2) 3. Transitive rule (IR3) 4. Union rule (IR4) 5. Decomposition rule (IR5) 6. Pseudotransitive rule (IR6) Functional dependency types Dependencies in a DBMS are relations between two or more attributes. A DBMS has the following types of functional dependency.
Database Systems Handbook BY:MUHAMMAD SHARIF 141 If the information stored in a table can uniquely determine other information in the same table, then it is called a functional dependency. Consider it an association between two attributes of the same relation. The major types are trivial, non-trivial, completely functional, multivalued, and transitive dependency.
Partial Dependency Partial dependency occurs when a non-prime attribute is functionally dependent on part of a candidate key.
Multivalued Dependency When the existence of one or more rows in a table implies one or more other rows in the same table, multivalued dependencies occur. A multivalued dependency occurs when two attributes in a table are independent of each other, but both depend on a third attribute. A multivalued dependency consists of at least two attributes that are dependent on a third attribute; that is why it always requires at least three attributes.
Join Dependency Join decomposition is a further generalization of multivalued dependencies. If the join of R1 and R2 over C is equal to relation R, then we can say that a join dependency (JD) exists.
Inclusion Dependency Multivalued dependency and join dependency can be used to guide database design, although they are both less common than functional dependencies. An inclusion dependency is a statement in which some columns of a relation are contained in other columns.
Transitive Dependency When an indirect relationship causes a functional dependency, it is called a transitive dependency.
Full Functional Dependency An attribute is fully functionally dependent on another attribute if it is functionally dependent on that attribute and not on any of its proper subsets.
Trivial Functional Dependency A → B is a trivial functional dependency if B is a subset of A. The following dependencies are also trivial: A → A, B → B. Example: { DeptId, DeptName } → DeptId.
Non-trivial Functional Dependency A → B is a non-trivial functional dependency if B is not a subset of A. In summary: Trivial – if a functional dependency (FD) X → Y holds where Y is a subset of X, it is called a trivial FD. Non-trivial – if an FD X → Y holds where Y is not a subset of X, it is called a non-trivial FD; for example, DeptId → DeptName. Completely non-trivial – if an FD X → Y holds where X ∩ Y = Φ (the intersection of X and Y is empty), it is said to be a completely non-trivial FD.
Related dependency types: join dependency (join decomposition is a further generalization of multivalued dependencies) and inclusion dependency. Example of dependency diagrams and flow
Database Systems Handbook BY:MUHAMMAD SHARIF 142 Dependency Preservation If a relation R is decomposed into relations R1 and R2, then the dependencies of R must either be a part of R1 or R2, or be derivable from the combination of the functional dependencies of R1 and R2. For example, suppose there is a relation R(A, B, C, D) with functional dependency set (A → BC). The relation R is decomposed into R1(ABC) and R2(AD), which is dependency-preserving because the FD A → BC is a part of relation R1(ABC).
Find the canonical cover. Solution: Given FD = { B → A, AD → BC, C → ABD }, first decompose the FDs using the decomposition rule (an Armstrong axiom): B → A AD → B (using the decomposition inference rule on AD → BC) AD → C (using the decomposition inference rule on AD → BC) C → A (using the decomposition inference rule on C → ABD) C → B (using the decomposition inference rule on C → ABD) C → D (using the decomposition inference rule on C → ABD) Now the set of FDs = { B → A, AD → B, AD → C, C → A, C → B, C → D }.
Canonical Cover / Irreducible Set A canonical cover, or irreducible set, of functional dependencies FD is a simplified set of FDs that has the same closure as the original set FD. Extraneous attributes: an attribute of an FD is said to be extraneous if we can remove it without changing the closure of the set of FDs.
Closure of Functional Dependency The closure of a functional dependency is the complete set of all attributes that can be functionally derived from a given functional dependency using the inference rules known as Armstrong's rules. If F is a set of functional dependencies, its closure can be denoted by F+. There are three steps to calculate the closure of a functional dependency: Step 1: Add the attributes that are present on the left-hand side of the original functional dependency. Step 2: Add the attributes present on the right-hand side of the functional dependency. Step 3: With the help of the attributes present on the right-hand side, check the other attributes that can be derived from the other given functional dependencies. Repeat this process until all the possible attributes that can be derived are added to the closure.
Database Systems Handbook BY:MUHAMMAD SHARIF 147 CHAPTER 8 DATABASE TRANSACTION, SCHEDULES, AND DEADLOCKS Overview: Transaction A transaction is an atomic sequence of actions in the database (reads and writes, commit, and abort). Each transaction must be executed completely and must leave the database in a consistent state. A transaction is a set of logically related operations; it contains a group of tasks. A transaction is an action, or series of actions, performed by a single user to access the contents of the database. A single task is the minimum processing unit, which cannot be divided further.
ACID Data concurrency means that many users can access data at the same time. Data consistency means that each user sees a consistent view of the data, including visible changes made by the user's own transactions and by the transactions of other users. The ACID model provides a consistent system for relational databases; the BASE model provides high availability for non-relational databases like NoSQL MongoDB. Techniques for achieving the ACID properties: write-ahead logging and checkpointing; serializability and two-phase locking.
Some important points: which component is responsible for maintaining each transaction property. Atomicity: the Transaction Manager (data remains atomic; a transaction is executed completely or not at all; an operation should not break in between or execute partially, so either all R(A) and W(A) are done or none are). Consistency: the application programmer / application logic checks; this is related to rollbacks. Isolation: the Concurrency Control Manager, which handles concurrency. Durability: the Recovery Manager (Algorithms for Recovery and Isolation Exploiting Semantics, ARIES). Logging and recovery handle failures (A, D); concurrency control and rollback, together with the application programmer, handle (C, I).
Consistency: the word consistency means that the value should always remain preserved; the database remains consistent before and after a transaction. Isolation and levels of isolation: the term 'isolation' means separation. Any changes that occur in a particular transaction are not visible to other transactions until the change is committed. A transaction isolation level is defined by the following phenomena, which are the classic concurrency control problems, sometimes called the three bad transaction dependencies; locks are often used to prevent these dependencies.
Database Systems Handbook BY:MUHAMMAD SHARIF 148 The five concurrency problems that can occur in the database are: 1. Temporary Update Problem 2. Incorrect Summary Problem 3. Lost Update Problem 4. Unrepeatable Read Problem 5. Phantom Read Problem
Dirty Read – A dirty read is a situation when a transaction reads data that has not yet been committed. For example, let's say transaction 1 updates a row and leaves it uncommitted; meanwhile, transaction 2 reads the updated row. If transaction 1 rolls back the change, transaction 2 will have read data that is considered never to have existed. (Dirty read problem: W-R conflict.)
Lost Update – Lost updates occur when multiple transactions select the same row and update the row based on the value selected. (Lost update problem: W-W conflict.)
Non-repeatable Read – A non-repeatable read occurs when a transaction reads the same row twice and gets a different value each time. For example, suppose transaction T1 reads data. Due to concurrency, another transaction T2 updates the same data and commits; now, if transaction T1 rereads the same data, it will retrieve a different value. (Unrepeatable read problem: W-R conflict.)
Phantom Read – A phantom read occurs when the same query is executed twice, but the rows retrieved by the two executions are different. For example, suppose transaction T1 retrieves a set of rows that satisfy some search criteria. Now, transaction T2 generates some new rows that match the search criteria for transaction T1. If transaction T1 re-executes the statement that reads the rows, it gets a different set of rows this time.
Based on these phenomena, the SQL standard defines four isolation levels: Read Uncommitted – The lowest isolation level. At this level, one transaction may read not-yet-committed changes made by another transaction, thereby allowing dirty reads; transactions are not isolated from each other. Read Committed – This isolation level guarantees that any data read is committed at the moment it is read, so it does not allow dirty reads. The transaction holds a read or write lock on the current row, and thus prevents other transactions from reading, updating, or deleting it. Repeatable Read – A more restrictive isolation level. The transaction holds read locks on all rows it references, and write locks on all rows it inserts, updates, or deletes. Since other transactions cannot read, update, or delete these rows, it avoids non-repeatable reads. Serializable – The highest and most restrictive isolation level. Execution at this level is guaranteed to be serializable: concurrently executing transactions appear to be executing serially.
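A minimal sketch of choosing an isolation level, in SQL Server-style (standard-like) syntax; the accounts table is hypothetical:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
  SELECT balance FROM accounts WHERE acct_id = 1;  -- rows read here are locked...
  -- ...so another transaction cannot change them until this one finishes.
COMMIT;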
Durability: Durability ensures permanency. In a DBMS, durability guarantees that once an operation executes successfully, its data becomes permanent in the database. If a transaction is committed, its effects survive errors, power loss, and other failures.
ACID Example:
States of Transaction
Begin, active, partially committed, failed, committed, end, aborted.
Aborted details are necessary. If any check fails and the transaction has reached the failed state, the database recovery system makes sure the database is returned to its previous consistent state; if it is not, the system aborts or rolls back the transaction to bring the database into a consistent state. If the transaction fails in the middle, all of its executed operations are rolled back so that the database is as it was before the transaction started. After aborting the transaction, the database recovery module selects one of two operations:
1) Re-start the transaction
2) Kill the transaction
The concurrency control protocols ensure the atomicity, consistency, isolation, durability, and serializability of the concurrent execution of database transactions. These protocols are categorized as:
1. Lock-Based Concurrency Control Protocol
2. Timestamp Concurrency Control Protocol
3. Validation-Based Concurrency Control Protocol
The scheduler
A module that schedules the transactions' actions, ensuring serializability. There are two main approaches:
1. Pessimistic: locks
2. Optimistic: timestamps, multi-version schemes, validation
Scheduling
A schedule maintains the jobs/transactions when many jobs are entered at the same time (by multiple users), together with the read/write operations those jobs perform. A schedule is a sequence of interleaved actions from all transactions: it executes actions from several transactions while preserving the order of the R(A) and W(A) operations within each individual transaction.
Note: Two schedules are equivalent if they have the same dependencies: they contain the same transactions and operations, and they order all conflicting operations of non-aborting transactions in the same way. A schedule is serializable if it is equivalent to a serial schedule.
Process scheduling handles the selection of a process for the processor on the basis of a scheduling algorithm, as well as the removal of a process from the processor. It is an important part of multiprogramming in an operating system. Process scheduling involves short-term, medium-term, and long-term scheduling.
The major differences between the long-term, medium-term, and short-term schedulers are as follows:
Long-term scheduler - A job scheduler. Its speed is the slowest of the three. It controls the degree of multiprogramming. It is almost absent or minimal in a time-sharing system. It selects processes from the pool and loads them into memory for execution.
Medium-term scheduler - A process-swapping scheduler. Its speed lies between that of the short-term and long-term schedulers. It reduces the degree of multiprogramming. It is a part of the time-sharing system. It can reintroduce a process into memory, and execution can be continued.
Short-term scheduler - A CPU scheduler. Its speed is the fastest of the three. It provides less control over the degree of multiprogramming. It is minimal in a time-sharing system. It selects those processes that are ready to execute.
Serial Schedule
A serial schedule is a type of schedule in which one transaction is executed completely before another transaction starts.
Example of Serial Schedule
Non-Serial Schedule and its types
If interleaving of operations is allowed, the schedule is non-serial.
Serializable schedule
Serializability is a guarantee about transactions over one or more objects; it does not impose real-time constraints. A schedule is serializable if its precedence graph is acyclic. The serializability of schedules is used to find non-serial schedules that allow transactions to execute concurrently without interfering with one another.
Example of Serializable Schedule
A serializable schedule always leaves the database in a consistent state. A serial schedule is always serializable because, in a serial schedule, a transaction starts only when the previous transaction has finished execution; a non-serial schedule, however, must be checked for serializability. A non-serial schedule of n transactions is serializable if it is equivalent to some serial schedule of those n transactions. A serial schedule does not allow concurrency: only one transaction executes at a time, and the next starts only when the currently running transaction has finished.
Linearizability: a guarantee about single operations on single objects. Once a write completes, all later reads (by wall-clock time) should reflect that write.
Types of Serializability
There are two types of serializability:
1. Conflict Serializability
2. View Serializability
Conflict Serializable
A schedule is conflict serializable if it is equivalent to some serial schedule: non-conflicting operations can be reordered to obtain a serial schedule. In general, a schedule is conflict-serializable if and only if its precedence graph is acyclic, so a precedence graph is used for testing conflict-serializability (a small sketch of this test appears below).
View Serializable
View serializability (view equivalence) is used to determine whether a schedule is view-serializable. A schedule is view-serializable if it is view equivalent to a serial schedule (one in which no interleaving of transactions occurs).
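Because conflict-serializability comes down to testing the precedence graph for a cycle, the test can be sketched directly. The schedule encoding and function names below are invented for the illustration, not taken from any DBMS:

# Sketch: testing conflict-serializability via the precedence graph.
# A schedule is a list of (transaction, operation, item) triples.

def precedence_graph(schedule):
    """Add an edge Ti -> Tj when an op of Ti conflicts with a later op of Tj."""
    edges = set()
    for i, (ti, op_i, item_i) in enumerate(schedule):
        for tj, op_j, item_j in schedule[i + 1:]:
            # Conflict: same item, different transactions, at least one write.
            if item_i == item_j and ti != tj and 'W' in (op_i, op_j):
                edges.add((ti, tj))
    return edges

def has_cycle(edges):
    """Depth-first search for a cycle; acyclic => conflict-serializable."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def visit(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:
                return True          # back edge => cycle
            if color.get(v, WHITE) == WHITE and visit(v):
                return True
        color[u] = BLACK
        return False
    return any(visit(u) for u in graph if color.get(u, WHITE) == WHITE)

# T1 -> T2 (conflict on A) and T2 -> T1 (conflict on B) form a cycle:
s = [('T1', 'W', 'A'), ('T2', 'R', 'A'), ('T2', 'W', 'B'), ('T1', 'R', 'B')]
print(has_cycle(precedence_graph(s)))   # True => not conflict-serializable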
Note: A schedule is view serializable if it is view equivalent to a serial schedule. Every conflict-serializable schedule is also view serializable, but not vice versa.
Non-Serializable Schedule
Non-serializable schedules are divided into two types: recoverable and non-recoverable schedules.
1. Recoverable Schedule (with cascading-abort, cascadeless, and strict variants). In a recoverable schedule, if a transaction T commits, then any other transaction that T read from must also have committed: whenever a transaction T commits, all transactions that wrote elements read by T have already committed.
Example of Recoverable Schedule
2. Non-Recoverable Schedule
The relation between the various types of schedules can be depicted as:
It can be seen that:
1. Cascadeless schedules are stricter than recoverable schedules, i.e., they are a subset of recoverable schedules.
2. Strict schedules are stricter than cascadeless schedules, i.e., they are a subset of cascadeless schedules.
3. Serial schedules satisfy the constraints of recoverable, cascadeless, and strict schedules, and hence are a subset of strict schedules.
Note: Linearizability + serializability = strict serializability: transaction behavior is equivalent to some serial execution, and that serial execution agrees with real time.
Serializability Theorems
Wormhole Theorem: A history is isolated if, and only if, it has no wormhole transactions.
Locking Theorem: If all transactions are well-formed and two-phase, then any legal history will be isolated.
Locking Theorem (converse): If a transaction is not well-formed or is not two-phase, then it is possible to write another transaction such that the resulting pair is a wormhole.
Rollback Theorem: An update transaction that does an UNLOCK and then a ROLLBACK is not two-phase.
The Thomas Write Rule guarantees a serializability order for the protocol and improves the Basic Timestamp Ordering algorithm. For a write operation W_item(X) issued by transaction T, the basic Thomas write rules are:
If TS(T) < R_TS(X), then the write is rejected and T is aborted and rolled back.
If TS(T) < W_TS(X), then the (obsolete) W_item(X) operation is simply not executed, and processing continues.
Different Types of Read-Write Conflicts in DBMS
As mentioned earlier, a read operation on its own is safe because it does not modify any information, so there is no read-read (RR) conflict in the database. That leaves three types of conflict in database transactions:
Problem 1: Reading uncommitted data (WR conflicts). Reading the value of an uncommitted object might yield an inconsistency: dirty reads, or write-then-read (WR) conflicts.
Problem 2: Unrepeatable reads (RW conflicts). Reading the same object twice might yield an inconsistency: read-then-write (RW) conflicts (write-after-read).
Problem 3: Overwriting uncommitted data (WW conflicts). Overwriting an uncommitted object might yield an inconsistency.
What is a write-read (WR) conflict? This conflict occurs when a transaction reads data written by another transaction before that transaction has committed.
What is a read-write (RW) conflict? Transaction T2 writes data that was previously read by transaction T1.
Here, if you look at the diagram above, the data read by transaction T1 before and after T2 commits is different.
What is a write-write (WW) conflict? Here transaction T2 writes data that has already been written by transaction T1: T2 overwrites the data written by T1. This is also called a blind write. The data written by T1 has vanished, so it is a data-update loss.
Phase Commit (PC) Protocols
One-phase commit
The single-phase commit protocol is more efficient at run time because all updates are done without any explicit coordination.
BEGIN
INSERT INTO CUSTOMERS (ID,NAME,AGE,ADDRESS,SALARY) VALUES (1, 'Ramesh', 32, 'Ahmedabad', 2000.00);
INSERT INTO CUSTOMERS (ID,NAME,AGE,ADDRESS,SALARY) VALUES (2, 'Khilan', 25, 'Delhi', 1500.00);
COMMIT;
Two-Phase Commit (2PC)
The most commonly used atomic commit protocol is two-phase commit. You may notice that it is very similar to the protocol used for total-order multicast: whereas the multicast protocol used a two-phase approach to let the coordinator select a commit time based on information from the participants, two-phase commit lets the coordinator decide whether a transaction will be committed or aborted based on information from the participants (a sketch follows below).
Three-Phase Commit (3PC)
Another real-world atomic commit protocol is three-phase commit. It can reduce the amount of blocking and provide more flexible recovery in the event of failure. Although it is a better choice in unusually failure-prone environments, its complexity makes 2PC the more popular choice. Distributed systems typically achieve transaction atomicity using two-phase commit and transaction serializability using distributed locking.
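A minimal sketch of the two-phase commit decision rule, assuming participant objects that expose prepare(), commit(), and abort() calls; these names are hypothetical, not a real driver API:

def two_phase_commit(participants):
    # Phase 1 (voting): ask every participant to prepare (force a log
    # record, keep locks) and vote YES or NO.
    votes = []
    for p in participants:
        try:
            votes.append(p.prepare())
        except Exception:
            votes.append(False)       # an unreachable participant counts as NO

    # Phase 2 (decision): commit only on a unanimous YES, otherwise abort.
    if all(votes):
        for p in participants:
            p.commit()
        return 'COMMIT'
    for p in participants:
        p.abort()
    return 'ABORT'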
DBMS Deadlock Types and Techniques
All lock requests are made to the concurrency-control manager, and transactions proceed only once the lock request is granted. A lock is a variable, associated with a data item, that controls access to that data item. Locking is the most widely used form of concurrency control.
Deadlock Example:
Lock modes and types
1. Binary Locks: A binary lock on a data item can be in either the locked or the unlocked state.
2. Shared/Exclusive Locks: This type of locking mechanism separates locks in a DBMS based on their use. A lock acquired on a data item to perform a write operation is an exclusive lock.
3. Simplistic Lock Protocol: This lock-based protocol allows transactions to obtain a lock on every object before beginning an operation; transactions may unlock a data item after finishing the write operation.
4. Pre-claiming Locking: The transaction evaluates its operations and requests all the locks it needs before it begins to execute. (By contrast, the two-phase locking protocol, also known as the 2PL protocol, requires that a transaction acquire no new lock after it has released one; it has two phases, growing and shrinking.)
5. Shared Lock: Shared locks are also referred to as read locks and are denoted by 'S'. If a transaction T holds a shared lock on data item X, then T can read X but cannot write X. Multiple shared locks can be placed simultaneously on a data item.
A deadlock is an unwanted situation in which two or more transactions wait indefinitely for one another to give up locks.
Four necessary conditions for deadlock:
Mutual exclusion - only one process at a time can use the resource.
Hold and wait - there must exist a process that is holding at least one resource and waiting to acquire additional resources that are currently held by other processes.
No preemption - resources cannot be preempted; a resource can be released only voluntarily by the process holding it.
Circular wait - each process waits for a resource held by the next process in the chain.
The Bakery algorithm is one of the simplest known solutions to the mutual exclusion problem for the general case of N processes. It is a critical-section solution for N processes that preserves the first-come, first-served property: before entering its critical section, a process receives a number, and the holder of the smallest number enters the critical section.
Deadlock detection
This technique allows deadlocks to occur, then detects and resolves them. The database is periodically checked for deadlocks. If a deadlock is detected, one of the transactions involved in the deadlock cycle is aborted, while the other transactions continue their execution; the aborted transaction is rolled back and restarted. When a transaction waits more than a specific amount of time to obtain a lock (the deadlock timeout), Derby can detect whether the transaction is involved in a deadlock. If deadlocks occur frequently in a multi-user system with a particular application, some debugging may be needed. A deadlock is a situation where two transactions are waiting for one another to give up locks.
Deadlock detection and removal schemes
Wait-for graph
This scheme allows the older transaction to wait but kills the younger one.
Phantom deadlock detection is the condition in which a deadlock does not actually exist but, due to delays in propagating local information, the deadlock detection algorithm identifies locks as already acquired and reports a deadlock anyway.
There are three alternatives for deadlock detection in a distributed system:
Centralized Deadlock Detector - One site is designated as the central deadlock detector.
Hierarchical Deadlock Detector - The deadlock detectors are arranged in a hierarchy.
Distributed Deadlock Detector - All the sites participate in detecting deadlocks and removing them.
The deadlock detection algorithm uses three data structures:
Available - A vector of length m indicating the number of available resources of each type.
Allocation - An n*m matrix; Allocation[i,j] is the number of resources of type j currently allocated to process i.
Request - An n*m matrix indicating the current request of each process; Request[i,j] is the number of instances of resource type j that process Pi is requesting.
Deadlock Avoidance
Deadlock avoidance: acquire locks in a pre-defined order, or acquire all locks at once before starting the transaction. Aborting a transaction is not always a practical approach; instead, deadlock avoidance mechanisms can be used to detect any potential deadlock situation in advance. The deadlock prevention technique avoids the conditions that lead to deadlock: it requires that every transaction lock all the data items it needs in advance, and if any of the items cannot be obtained, none of the items are locked and the transaction is rescheduled for execution. The deadlock prevention technique is used in two-phase locking. To prevent any deadlock situation in the system, the DBMS aggressively inspects all the operations that transactions are about to execute; if it finds that a deadlock situation might occur, the transaction is never allowed to execute.
Deadlock Prevention Algorithms
1. Wait-Die scheme
2. Wound-Wait scheme
Note! Deadlock prevention is stricter than deadlock avoidance. The algorithms are as follows:
Wait-Die - If T1 is older than T2, T1 is allowed to wait. Otherwise, if T1 is younger than T2, T1 is aborted and later restarted. (Wait-die: only an older transaction may wait for a younger one.)
Wound-Wait - If T1 is older than T2, T2 is aborted (wounded) and later restarted. Otherwise, if T1 is younger than T2, T1 is allowed to wait. (Wound-wait: only a younger transaction may wait for an older one.)
Note: In a bulky system, deadlock prevention techniques may work well.
Here, we want an algorithm that avoids deadlock by making the right choice all the time. Dijkstra's Banker's Algorithm is an approach that tries to give processes as much as possible while guaranteeing no deadlock.
Safe state - a state is safe if the system can allocate resources to each process, in some order, and still avoid a deadlock.
The Banker's Algorithm for a single resource type is a resource allocation and deadlock avoidance algorithm, so named because it models a banker who never lends out cash unless every customer's maximum need can still eventually be met. As a new process P1 enters, it declares the maximum number of resources it needs. The system checks whether allocating those resources to P1 will leave the system in a safe state. If it will, the resources are allocated to P1; otherwise, P1 must wait until other processes release some resources. This is the basic idea of the Banker's Algorithm: a state is safe if the system can allocate all resources requested by all processes (up to their stated maximums) without entering a deadlock state. A sketch of the safety check appears at the end of this section.
Resource Preemption: To eliminate deadlocks using resource preemption, we preempt some resources from processes and give those resources to other processes. This method raises three issues:
(a) Selecting a victim: We must determine which resources and which processes are to be preempted, in an order that minimizes cost.
(b) Rollback: We must determine what should be done with a process from which resources are preempted. One simple idea is total rollback: abort the process and restart it.
(c) Starvation: The same process may always be picked as the victim, so that it never completes its designated task. This situation is called starvation and must be avoided; one solution is to pick a given process as a victim only a finite number of times.
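A minimal sketch of the safety check at the heart of the Banker's Algorithm, using the Available vector and Allocation/Need matrices in the shapes described above; the data values are illustrative:

def is_safe(available, allocation, need):
    """available: list[m]; allocation, need: list[n][m]. True if a safe order exists."""
    work = available[:]
    finished = [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            # Process i can finish if its remaining need fits into what is free now.
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # When it finishes, it releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)   # safe iff every process could finish

# One resource type, 10 units total: allocations 3/2/2, remaining needs 6/2/5.
print(is_safe([3], [[3], [2], [2]], [[6], [2], [5]]))   # True (order: P2, P3, P1)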
Concurrent vs non-concurrent data access
Concurrent execution is done for better transaction throughput and response time, achieved through better utilization of resources.
What is Concurrency Control?
Concurrent access is quite easy if all users are just reading data: there is no way for them to interfere with one another. Any practical database, however, has a mix of READ and WRITE operations, so concurrency is a challenge. DBMS concurrency control is used to address the resulting conflicts, which mostly occur in multi-user systems.
The main concurrency control techniques/methods are:
1. Two-phase locking protocol
2. Timestamp ordering protocol
3. Multi-version concurrency control
4. Validation concurrency control
The Two-Phase Locking protocol, also known as the 2PL protocol, is a method of concurrency control in a DBMS that ensures serializability by applying locks to transaction data, blocking other transactions from accessing the same data simultaneously. The Two-Phase Locking protocol helps eliminate concurrency problems in a DBMS. Every 2PL schedule is serializable.
Theorem: 2PL enforces conflict-serializable schedules, but it does not enforce recoverable schedules.
2PL rule: Once a transaction has released a lock, it is not allowed to obtain any other locks.
This locking protocol divides the execution of a transaction into three parts. In the first phase, as the transaction begins to execute, it requests the locks it needs. In the second part, the transaction obtains all the locks. The third phase starts when the transaction releases its first lock: in this phase, the transaction cannot demand any new locks; it only releases the acquired locks.
The Two-Phase Locking protocol allows each transaction to make lock and unlock requests in two phases, a growing phase and a shrinking phase:
A growing phase, in which the transaction acquires all the required locks without unlocking any data. Once all locks have been acquired, the transaction is at its lock point.
A shrinking phase, in which the transaction releases locks and cannot obtain any new lock.
In practice, the growing phase is the entire transaction and the shrinking phase happens during the commit. A minimal illustration of the 2PL rule follows.
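A small sketch of the 2PL rule itself, tracking only the growing and shrinking phases of a single transaction; the class and method names are invented for the illustration:

class TwoPhaseLockingError(Exception):
    pass

class Transaction2PL:
    def __init__(self, name):
        self.name = name
        self.locks = set()
        self.shrinking = False     # False => still in the growing phase

    def lock(self, item):
        if self.shrinking:
            raise TwoPhaseLockingError(
                f"{self.name}: cannot lock {item} after an unlock (2PL violated)")
        self.locks.add(item)

    def unlock(self, item):
        self.shrinking = True      # the first unlock starts the shrinking phase
        self.locks.discard(item)

t = Transaction2PL('T1')
t.lock('A'); t.lock('B')           # growing phase
t.unlock('A')                      # shrinking phase begins
try:
    t.lock('C')                    # violates the 2PL rule
except TwoPhaseLockingError as e:
    print(e)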
The 2PL protocol indeed offers serializability; however, it does not ensure that deadlocks never happen. In the diagram above, local and global deadlock detectors search for deadlocks and resolve them by returning transactions to their initial states.
Strict Two-Phase Locking Method
Strict two-phase locking is almost like 2PL; the only difference is that Strict-2PL never releases a lock after using it. It holds all the locks until the commit point and releases them all in one go when the process is over: all locks held by a transaction are released when the transaction is completed. Strict 2PL guarantees conflict serializability and avoids cascading rollbacks, but it does not by itself prevent deadlocks.
Centralized 2PL
In centralized 2PL, a single site is responsible for the lock management process: there is only one lock manager for the entire DBMS.
Primary Copy 2PL
In the primary copy 2PL mechanism, lock managers are distributed over different sites, and a particular lock manager is responsible for managing the locks for a given set of data items. When the primary copy has been updated, the change is propagated to the slaves.
Distributed 2PL
In this kind of two-phase locking mechanism, lock managers are distributed across all sites, each responsible for managing the locks for the data at its site. If no data is replicated, it is equivalent to primary copy 2PL. The communication costs of distributed 2PL are considerably higher than those of primary copy 2PL.
Timestamp Methods for Concurrency Control
A timestamp is a unique identifier created by the DBMS to identify the relative starting time of a transaction. Typically, timestamp values are assigned in the order in which transactions are submitted to the system, so a timestamp can be thought of as the transaction start time. Time-stamping is therefore a method of concurrency control in which each transaction is assigned a transaction timestamp. Timestamps must have two properties:
Uniqueness: no two timestamp values can be equal.
Monotonicity: timestamp values always increase.
Timestamp methods involve granule timestamps, timestamp ordering, and conflict resolution in timestamps.
The timestamp-based protocol in a DBMS is an algorithm that uses the system time or a logical counter as a timestamp to serialize the execution of concurrent transactions. It ensures that every conflicting read and write operation is executed in timestamp order.
Conflict Resolution in Timestamps: To deal with conflicts in timestamp algorithms, some transactions involved in conflicts are made to wait while others are aborted. The main strategies of conflict resolution in timestamps are:
Wait-die: The older transaction waits for the younger one if the younger has accessed the granule first. A younger transaction is aborted (dies) and restarted if it tries to access a granule after an older concurrent transaction.
Wound-wait: The older transaction pre-empts the younger by aborting (wounding) it if the younger transaction tries to access a granule after an older concurrent transaction.
An older transaction will wait for a younger one to commit if the younger has accessed a granule that both want.
Timestamp Ordering: There are three basic variants of timestamp-based methods of concurrency control:
1. Total timestamp ordering
2. Partial timestamp ordering
3. Multiversion timestamp ordering
Multi-version concurrency control
Multiversion Concurrency Control (MVCC) enables snapshot isolation. Snapshot isolation means that whenever a transaction would take a read lock on a page, it makes a copy of the page instead and then performs its operations on that copied page. This frees writers from blocking due to read locks held by other transactions. MVCC maintains multiple versions of objects, each with its own timestamp, and allocates the correct version to each read. Multiversion schemes keep old versions of data items to increase concurrency. The main difference between MVCC and standard locking is that read locks do not conflict with write locks, so reading never blocks writing and writing never blocks reading.
Advantage of MVCC: the locking needed for serializability is considerably reduced.
Disadvantage of MVCC: visibility-check overhead (on every tuple read/write).
Validation-Based Protocols
The validation-based protocol in a DBMS, also known as the optimistic concurrency control technique, is a method to avoid conflicts between concurrent transactions. In this protocol, the local copies of the transaction data are updated rather than the data itself, which results in less interference during the execution of the transaction.
Optimistic methods of concurrency control are based on the assumption that conflicts between database operations are rare and that it is better to let transactions run to completion and only check for conflicts before they commit. The validation-based protocol is performed in three phases (a sketch follows at the end of this section):
1. Read Phase: the data values from the database can be read by the transaction, but write operations and updates are applied only to local copies of the data, not to the actual database.
2. Validation Phase: the data is checked to ensure that applying the transaction's updates to the database does not violate serializability.
3. Write Phase: if validation succeeds, the updates are applied to the database; otherwise, the updates are not applied and the transaction is rolled back.
Laws of Concurrency Control
1. First Law of Concurrency Control: concurrent execution should not cause application programs to malfunction.
2. Second Law of Concurrency Control: concurrent execution should not have lower throughput or much higher response times than serial execution.
Lock thrashing is the point where system performance (throughput) decreases with increasing load (adding more active transactions). It happens due to the contention of locks: transactions waste time on lock waits.
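A minimal sketch of the three validation phases under stated simplifications: the database is a plain dictionary, and a single global list of committed write sets stands in for the system's transaction log. All names here are invented for the illustration:

class OptimisticTxn:
    committed_writes = []          # (commit_ts, write_set) of finished txns

    def __init__(self, start_ts):
        self.start_ts = start_ts
        self.read_set, self.local = set(), {}

    def read(self, db, item):      # Read phase: reads go to the database,
        self.read_set.add(item)    # but are remembered for validation.
        return self.local.get(item, db.get(item))

    def write(self, item, value):  # Read phase: writes touch local copies only.
        self.local[item] = value

    def commit(self, db, commit_ts):
        # Validation phase: fail if an overlapping committed transaction
        # wrote any item this transaction read.
        for ts, wset in OptimisticTxn.committed_writes:
            if ts > self.start_ts and wset & self.read_set:
                return False       # validation failed; caller restarts the txn
        db.update(self.local)      # Write phase: apply the local copies.
        OptimisticTxn.committed_writes.append((commit_ts, set(self.local)))
        return True

db = {'A': 1}
t1 = OptimisticTxn(start_ts=1)
t1.write('A', t1.read(db, 'A') + 1)
print(t1.commit(db, commit_ts=2), db)   # True {'A': 2}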
The default concurrency control mechanism depends on the table type: disk-based tables (D-tables) are optimistic by default, while main-memory tables (M-tables) are always pessimistic. Pessimistic locking (locking and timestamp based) is useful if there are a lot of updates and relatively high chances of users trying to update data at the same time. Optimistic (validation-based) locking is useful if the possibility of conflict is very low: many records but relatively few users, or very few updates and mostly read-type operations.
Optimistic concurrency control is based on the idea of conflict detection and transaction restart, while pessimistic concurrency control uses locking as the basic serialization mechanism: it assumes that two or more users will want to update the same record at the same time and prevents that possibility by locking the record, no matter how unlikely conflicts are.
Properties
Optimistic locking is useful, indeed critical, in stateless environments (such as mod_plsql and the like). With optimistic locking, you read data out and update it only if it did not change in the meantime. Optimistic locking works well when developers modify the same object; problems occur when multiple developers modify different objects on the same page at the same time, since modifying one object may affect the processing of the entire page, which other developers may not be aware of. With pessimistic locking, you lock the data as you read it out and then modify it.
Lock Granularity
A database is represented as a collection of named data items. The size of the data item chosen as the unit of protection by a concurrency control program is called granularity. Locking can take place at the following levels:
Database level.
Table level (coarse-grain locking).
Page level.
Row (tuple) level.
Attribute (field) level.
Multiple Granularity
Let's start by understanding the meaning of granularity. Granularity is the size of the data item allowed to be locked; multiple granularity means hierarchically breaking up the database into blocks that can be locked. The multiple granularity protocol enhances concurrency and reduces lock overhead: it keeps track of what to lock and how to lock, and makes it easy to decide whether to lock or unlock a data item. This type of hierarchy can be graphically represented as a tree.
There are three additional lock modes with multiple granularity:
Intention-Shared (IS): explicit locking at a lower level of the tree, but only with shared locks.
Intention-Exclusive (IX): explicit locking at a lower level with exclusive or shared locks.
Shared & Intention-Exclusive (SIX): the node is locked in shared mode, and some node below it is locked in exclusive mode by the same transaction.
Compatibility Matrix with Intention Lock Modes: the table below describes the compatibility matrix for these lock modes:
A requested lock mode on a node is granted only if it is compatible with every lock mode already held on that node:
        IS    IX    S     SIX   X
IS      yes   yes   yes   yes   no
IX      yes   yes   no    no    no
S       yes   no    yes   no    no
SIX     yes   no    no    no    no
X       no    no    no    no    no
The Phantom Problem
A database is not a collection of static elements: tuples are inserted and deleted, and when they are, the phantom problem appears. A "phantom" is a tuple that is invisible during part of a transaction's execution but visible during another part. Even if transactions lock individual data items, phantoms can result in non-serializable executions.
In our example:
- T1 reads the list of products
- T2 inserts a new product
- T1 re-reads: a new product appears!
Dealing with Phantoms
Lock the entire table, or lock the index entry for 'blue' (if an index is available), or use predicate locks (a lock on an arbitrary predicate). Dealing with phantoms is expensive.
END
CHAPTER 9 RELATIONAL ALGEBRA AND QUERY PROCESSING
Relational algebra is a procedural query language: it gives a step-by-step process to obtain the result of the query, using operators to perform the steps.
What is an "algebra"? Answer: a set of operands and operations that are "closed" under all compositions.
What is the basis of query languages? Answer: two formal query languages form the basis of "real" query languages (e.g., SQL):
1) Relational Algebra: Operational; it provides a recipe for evaluating the query and is useful for representing execution plans. It is a language based on operators and a domain of values; the operators map values taken from the domain into other domain values. Domain: the set of relations/tables.
2) Relational Calculus: Lets users describe what they want rather than how to compute it (non-operational, non-procedural, declarative).
SQL is an abstraction of relational algebra, which makes it much easier to use than writing the raw math. The parts of SQL that correspond directly to relational algebra include:
SQL -> Relational Algebra
Selecting columns -> Projection
Selecting rows (WHERE clause) -> Selection
INNER JOIN -> Join
CROSS JOIN -> Cartesian product
UNION -> Set union
EXCEPT/MINUS -> Set difference
Detailed explanations of the relational operators follow:
Operation (Symbol) and Purpose
Select (σ): The SELECT operation is used for selecting a subset of the tuples according to a given selection condition (unary operator).
Projection (π): The projection eliminates all attributes of the input relation except those mentioned in the projection list (unary operator). The projection operator has to eliminate duplicates!
Union (∪): Includes all tuples that are in table A or in table B.
Set Difference (-): The result of A - B is a relation that includes all tuples that are in A but not in B.
Intersection (∩): Defines a relation consisting of the set of all tuples that are in both A and B.
Cartesian Product (X): The Cartesian product is helpful for merging columns from two relations.
Inner Join: Includes only those tuples that satisfy the matching criteria.
Theta Join (θ): The general case of the JOIN operation is called a theta join; it is denoted by the symbol θ.
Equi Join: When a theta join uses only an equivalence condition, it becomes an equi join.
Natural Join (⋈): A natural join can be performed only if there is a common attribute (column) between the relations.
Outer Join: An outer join returns, along with the tuples that satisfy the matching criteria, some or all of the tuples that do not match.
Left Outer Join: Keeps all tuples of the left relation.
Right Outer Join: Keeps all tuples of the right relation.
Full Outer Join: All tuples from both relations are included in the result, irrespective of the matching condition.
Select Operation
Notation: σp(r), where p is called the selection predicate.
Project Operation
Notation: πA1,..., Ak (r)
The result is defined as the relation of k columns obtained by deleting the columns that are not listed.
Condition join / theta join
Union Operation
Notation: r ∪ s
What is the composition of operators/operations? In general, since the result of a relational-algebra operation is of the same type (relation) as its inputs, relational-algebra operations can be composed together into a relational-algebra expression. Composing relational-algebra operations into relational-algebra expressions is just like composing arithmetic operations (such as −, ∗, and ÷) into arithmetic expressions.
Examples of Relational Algebra
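For instance, over an illustrative schema EMPLOYEE(emp_id, emp_name, dept_id, salary) and DEPARTMENT(dept_id, dept_name) - these relation and attribute names are assumptions made for the example, not taken from a specific exercise in this handbook:
Names of employees earning more than 10000:
π emp_name (σ salary>10000 (EMPLOYEE))
Employee names together with their department names:
π emp_name, dept_name (EMPLOYEE ⋈ DEPARTMENT)
Departments that currently have no employees:
π dept_id (DEPARTMENT) - π dept_id (EMPLOYEE)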
Relational Calculus
There is an alternative way of formulating queries known as relational calculus. Relational calculus is a non-procedural query language: the user is not concerned with the details of how the results are obtained. The relational calculus tells what to do but never explains how to do it. Most commercial relational languages are based on aspects of relational calculus, including SQL, QBE, and QUEL. It is based on predicate calculus, a name derived from a branch of symbolic logic. A predicate is a truth-valued function with arguments.
Notations of RC
Types of relational calculus:
TRC: variables range over (i.e., get bound to) tuples.
DRC: variables range over domain elements (= field values).
Tuple Relational Calculus (TRC)
TRC (tuple relational calculus) can be quantified: in TRC, we can use the existential (∃) and universal (∀) quantifiers.
Domain Relational Calculus (DRC)
Domain relational calculus uses the same operators as tuple calculus. It uses the logical connectives ∧ (and), ∨ (or), and ¬ (not), and it uses the existential (∃) and universal (∀) quantifiers to bind variables. The QBE (Query-By-Example) language is a query language related to domain relational calculus.
Differences between RA and RC
1. Language type: Relational algebra is a procedural query language; relational calculus is a non-procedural, declarative query language.
2. Objective: Relational algebra targets how to obtain the result; relational calculus targets what result to obtain.
3. Order: Relational algebra specifies the order in which operations are to be performed; relational calculus specifies no such order of execution for its operations.
4. Dependency: Relational algebra is domain-independent; relational calculus can be domain-dependent.
5. Programming language: Relational algebra is close to programming-language concepts; relational calculus is not related to programming-language concepts.
Differences between TRC and DRC
In TRC, the variables represent tuples from specified relations; in DRC, the variables represent values drawn from a specified domain. A tuple is a single element of a relation (in database terms, a row); a domain is equivalent to a column's data type plus any constraints on the value of the data. In TRC, the filtering variable ranges over the tuples of the relation; in DRC, the filtering is done based on the domains of attributes. In TRC, a query cannot be expressed using a membership condition, whereas in DRC it can. QUEL (Query Language) is the query language related to TRC, while QBE (Query-By-Example) is the one related to DRC. TRC reflects traditional pre-relational file structures, while DRC is more similar to logic as a modeling language.
TRC notation: {T | P(T)} or {T | Condition(T)}. Example: {T | EMPLOYEE(T) AND T.DEPT_ID = 10}
DRC notation: {<a1, a2, a3, ..., an> | P(a1, a2, a3, ..., an)}. Example (with an illustrative attribute list): {<id, name, dept_id> | <id, name, dept_id> ∈ EMPLOYEE ∧ dept_id = 10}
Examples of RC:
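As a small stand-in, using the same illustrative EMPLOYEE(emp_id, emp_name, dept_id, salary) schema assumed earlier:
Employees in department 10 (TRC): {t | t ∈ EMPLOYEE ∧ t.dept_id = 10}
Names of employees earning more than 10000 (TRC): {t.emp_name | t ∈ EMPLOYEE ∧ t.salary > 10000}
The same query in DRC: {<n> | ∃ i, d, s (<i, n, d, s> ∈ EMPLOYEE ∧ s > 10000)}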
Query Block in RA
Query tree plan
SQL, Relational Algebra, Tuple Calculus, and Domain Calculus: Comparative Examples
Select Operation
R = (A, B)
Relational Algebra: σB=17 (r)
Tuple Calculus: {t | t ∈ r ∧ t[B] = 17}
Domain Calculus: {<a, b> | <a, b> ∈ r ∧ b = 17}
Project Operation
R = (A, B)
Relational Algebra: ΠA(r)
Tuple Calculus: {t | ∃ p ∈ r (t[A] = p[A])}
Domain Calculus: {<a> | ∃ b (<a, b> ∈ r)}
Combining Operations
R = (A, B)
Relational Algebra: ΠA(σB=17 (r))
Tuple Calculus: {t | ∃ p ∈ r (t[A] = p[A] ∧ p[B] = 17)}
Domain Calculus: {<a> | ∃ b (<a, b> ∈ r ∧ b = 17)}
Natural Join
R = (A, B, C, D), S = (B, D, E)
Relational Algebra: r ⋈ s = Πr.A,r.B,r.C,r.D,s.E(σr.B=s.B ∧ r.D=s.D (r × s))
Tuple Calculus: {t | ∃ p ∈ r ∃ q ∈ s (t[A] = p[A] ∧ t[B] = p[B] ∧ t[C] = p[C] ∧ t[D] = p[D] ∧ t[E] = q[E] ∧ p[B] = q[B] ∧ p[D] = q[D])}
Domain Calculus: {<a, b, c, d, e> | <a, b, c, d> ∈ r ∧ <b, d, e> ∈ s}
Query Processing in DBMS
Query processing is the activity of extracting data from the database. Query processing takes several steps to fetch the data:
1. Parsing and translation
2. Optimization
3. Evaluation
Query processing works in the following way.
Parsing and Translation
Query processing begins with certain activities for data retrieval, for example:
select emp_name from Employee where salary>10000;
To make the system understand the user query, it is translated into relational algebra form. The query above can be expressed in relational algebra as:
πemp_name (σsalary>10000 (Employee))
After translating the given query, each relational algebra operation can be executed using one of several different algorithms. In this way, query processing begins its work.
Query processor
The query processor assists in the execution of database queries such as retrieval, insertion, update, or removal of data. Its key components are:
Data Manipulation Language (DML) compiler
Query parser
Query rewriter
Query optimizer
Query executor
Query Processing Workflow
From the moment the query is written and submitted by the user to the point of its execution and the eventual return of the results, several steps are involved. These steps are outlined in the following diagram.
What Does Parsing a Query Mean?
The parsing of a query is performed within the database using the optimizer component. Taking all of its inputs into consideration, the optimizer decides the best possible way to execute the query. This information is stored within the SGA in the library cache, a sub-pool within the shared pool. The memory area within the library cache in which the information about a query's processing is kept is called the cursor. If a reusable cursor is found within the library cache, it is just a matter of picking it up and using it to execute the statement; this is called soft parsing. If it is not possible to find a reusable cursor, or if the query has never been executed before, query optimization is required; this is called hard parsing.
Network model with query processing
Understanding Hard Parsing
Hard parsing means that either the cursor was not found in the library cache, or it was found but was invalidated for some reason. In either case, hard parsing means that the optimizer must do work to find the most optimal execution plan for the query. Before the search for the best plan starts, some tasks are completed. These tasks are executed every time, even if the same query runs in the same session N times:
1. Syntax check
2. Semantics check
3. Hashing the query text and generating a hash key-value pair
Phases of query execution in the system: the query first travels from the client process to the server process and into the PGA SQL area, and then the following phases start:
1. Parsing (build the parse tree; syntax check, semantic check, shared pool check - the shared pool check enables a soft parse)
2. Transformation (binding)
3. Estimation / query optimization
4. Plan generation and row source generation
5. Query execution and plan
6. Query result
Index and table scans in the query execution process
Query Evaluation
Query Evaluation Techniques for Large Databases
The logic applied to the evaluation of SELECT statements, as described here, does not precisely reflect how the DBMS server evaluates a query to determine the most efficient way to return results. However, by applying this logic to your queries and data, the results of your queries can be anticipated:
1. Evaluate the FROM clause. Combine all the sources specified in the FROM clause to create a Cartesian product (a table composed of all the rows and columns of the sources). If joins are specified, evaluate each join to obtain its results table and combine it with the other sources in the FROM clause. If SELECT DISTINCT is specified, discard duplicate rows.
2. Apply the WHERE clause. Discard rows in the result table that do not fulfill the restrictions specified in the WHERE clause.
3. Apply the GROUP BY clause. Group results according to the columns specified in the GROUP BY clause.
4. Apply the HAVING clause. Discard rows in the result table that do not fulfill the restrictions specified in the HAVING clause.
5. Evaluate the SELECT clause. Discard columns that are not specified in the SELECT clause. (In the case of SELECT FIRST n ... UNION SELECT ..., the first n rows of the result of the union are chosen.)
6. Perform any unions. Combine result tables as specified in the UNION clause. (In the case of SELECT FIRST n ... UNION SELECT ..., the first n rows of the result of the union are chosen.)
7. Apply the ORDER BY clause. Sort the result rows as specified.
Steps to process a query: parsing, validation, resolution, optimization, plan compilation, execution.
The architecture of query engines: query processing algorithms iterate over the members of input sets; the algorithms are algebra operators. The physical algebra is the set of operators, data representations, and associated cost functions that the database execution engine supports, while the logical algebra is more closely related to the data model and the expressible queries of the data model (e.g., SQL). Synchronization and data transfer between operators are the key issues. Naive query-plan execution methods include the creation of temporary files/buffers, using one process per operator, and using IPC. The practical method is to implement all operators as a set of procedures (open, next, and close) and have operators schedule each other within a single process via simple function calls. Each time an operator needs another piece of data ("granule"), it calls its data input operator's next function to produce one. Operators structured in this manner are called iterators (a small illustration follows below).
Note: the figure here showed three SQL relational algebra query plans, one pushed and one nearly fully pushed.
Query plans are algebra expressions and can be represented as trees. Left-deep (every right subtree is a leaf), right-deep (every left subtree is a leaf), and bushy (arbitrary) are the three common structures. In a left-deep tree, each operator draws input from one input and an inner loop iterates over the other input.
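A small illustration of the open-next-close (iterator) style using Python generators; the table contents and operator names are invented for the example:

# Each operator pulls one row at a time ("granule") from its input operator,
# exactly as the iterator architecture above describes.

def scan(table):                      # leaf operator: full table scan
    for row in table:
        yield row

def select(pred, child):              # filter operator, pulls from its input
    for row in child:
        if pred(row):
            yield row

def project(cols, child):             # projection operator
    for row in child:
        yield {c: row[c] for c in cols}

employee = [
    {'emp_name': 'Ramesh', 'salary': 12000},
    {'emp_name': 'Khilan', 'salary': 8000},
]
# Pipeline for: select emp_name from Employee where salary > 10000
plan = project(['emp_name'], select(lambda r: r['salary'] > 10000, scan(employee)))
print(list(plan))                     # [{'emp_name': 'Ramesh'}]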
Cost Estimation
The cost of a query evaluation plan is estimated in terms of various resources, including the number of disk accesses and the execution time taken by the CPU to execute the query.
Query Optimization
Summary of the steps in processing an SQL query: lexical analysis, parsing, validation, query optimizer, query code generator, runtime database processor. The term optimization here means "choose a reasonably efficient strategy" (not necessarily the best strategy); query optimization is choosing a suitable strategy to execute a particular query more efficiently.
An SQL query undergoes several stages: lexical analysis (scanning, LEX), parsing (YACC), and validation.
Scanning: identify the SQL tokens.
Parsing: check the query syntax according to the SQL grammar.
Validation: check that all attribute and relation names are valid in the particular database being queried.
Then the query tree or the query graph is created (these are internal representations of the query).
Main techniques to implement query optimization:
- Heuristic rules (to order the execution of operations in a query)
- Computing cost estimates of different execution strategies
Process for heuristic optimization:
1. The parser of a high-level query generates an initial internal representation.
2. Heuristic rules are applied to optimize the internal representation.
3. A query execution plan is generated to execute groups of operations, based on the access paths available on the files involved in the query.
Query optimization example:
Basic algorithms for executing query operations / query optimization
Sorting
External sorting is a basic ingredient of relational operators that use sort-merge strategies. Sorting is used implicitly in SQL in many situations: the ORDER BY clause, sort-merge joins, union and intersection, and duplicate elimination (DISTINCT).
Sorting can be avoided if we have an index (ordered access to the data).
External sorting: sorting large files of records that do not fit entirely in main memory.
Internal sorting: sorting files that fit entirely in main memory.
All sorting in "real" database systems uses merging techniques, since very large data sets are expected; a sketch appears at the end of this section. The interfaces of sorting modules should follow the structure of iterators. Sorting exploits the duality of quicksort and mergesort: a sort proceeds in a divide phase and a combine phase, and one of the two phases is based on logical keys (indexes) while the other physically arranges the data items (which phase is the logical one is particular to each algorithm). There are two sub-algorithms: one for sorting a run within main memory, another for managing runs on disk or tape. The degree of fan-in (the number of runs merged in a given step) is a key parameter.
External sorting is also the first step in bulk-loading a B+ tree index (i.e., sort the data entries and records), and it is useful for eliminating duplicate copies in a collection of records. The sort-merge join algorithm involves sorting as well.
Hashing
Hashing should be considered for equality matches, in general. Hashing-based query processing algorithms use an in-memory hash table of database objects; if the data in the hash table is bigger than main memory (the common case), hash table overflow occurs. Three techniques for overflow handling exist:
Avoidance: the input set is partitioned into F files before any in-memory hash table is built. Partitions can be dealt with independently. Partition sizes must be chosen well, or recursive partitioning will be needed.
Resolution: assume overflow won't occur; if it does, partition dynamically.
Hybrid: like resolution, but when partitioning, write only one partition to disk and keep the rest in memory.
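A compact sketch of the run-then-merge idea described above, with in-memory lists standing in for on-disk runs; the run size and data values are illustrative:

import heapq

def external_sort(records, run_size):
    # Phase 1: sort chunks of run_size records ("runs") independently,
    # as if each chunk were sorted in memory and written back to disk.
    runs = [sorted(records[i:i + run_size])
            for i in range(0, len(records), run_size)]
    # Phase 2: k-way merge of the runs; the fan-in here is len(runs).
    return list(heapq.merge(*runs))

print(external_sort([9, 4, 7, 1, 8, 2, 6, 3], run_size=3))
# [1, 2, 3, 4, 6, 7, 8, 9]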
Database tuning
END
CHAPTER 10 FILE STRUCTURES, INDEXING, AND HASHING
Overview
Related data and information are stored collectively in file formats. A file is a sequence of records stored in binary format.
File Organization
File organization defines how file records are mapped onto disk blocks. We have the following types of file organization for organizing file records:
Sorted Files: Best if records must be retrieved in some order, or if only a 'range' of records is needed.
Sequential File Organization: Records are stored in sequential order based on the value of the search key of each record. Organizing records by an index or key is called sequential file organization, and it is much faster for finding records based on that key.
Hashing File Organization: A hash function is computed on some attribute of each record; the result specifies in which block of the file the record is placed. Data structures that organize records via trees or hashing on some key are called hashing file organizations.
Heap File Organization: A record can be placed anywhere in the file where there is space; there is no ordering in the file. Records are placed randomly. Every record can be placed anywhere in the table file, wherever there is space for the record. Virtually all databases provide heap file organization. To find a row in a heap-organized table, for example all rows where the value of account_id is A-591, the system must search through the entire table file; this is called a file scan.
Note: Generally, each relation is stored in a separate file.
Clustered File Organization
Clustered file organization is not considered good for large databases. In this mechanism, related records from one or more relations are kept in the same disk block; the ordering of records is not based on the primary key or a search key.
File Operations
Operations on database files can be broadly classified into two categories:
1. Update operations
2. Retrieval operations
Update operations change data values by insertion, deletion, or update. Retrieval operations, on the other hand, do not alter the data but retrieve it, optionally after conditional filtering. In both types of operations, selection plays a significant role. Besides the creation and deletion of a file, several other operations can be performed on files:
Open - A file can be opened in one of two modes, read mode or write mode. In read mode, the operating system does not allow anyone to alter the data; in other words, the data is read-only. Files opened in read mode can be shared among several entities. Write mode allows data modification; files opened in write mode can be read but cannot be shared.
Locate - Every file has a file pointer, which tells the current position where data is to be read or written. This pointer can be adjusted accordingly: using the find (seek) operation, it can be moved forward or backward.
Read - By default, when a file is opened in read mode, the file pointer points to the beginning of the file. There are options by which the user can tell the operating system where to locate the file pointer at the time of opening the file. The data immediately following the file pointer is read.
Write - The user can open a file in write mode, which enables editing its contents: deletion, insertion, or modification. The file pointer can be located at the time of opening or can be changed dynamically if the operating system allows it.
Close - This is the most important operation from the operating system's point of view. When a request to close a file is generated, the operating system removes all of its locks (if in shared mode).
Tree-Structured Indexing
Indexing
Indexing is a data structure technique for efficiently retrieving records from database files based on the attributes on which the indexing has been done. Indexing in database systems is similar to what we see in books. Indexing is defined based on its indexing attributes and can be of the following types:
1. Primary Index - A primary index is defined on an ordered data file that is ordered on a key field. The key field is generally the primary key of the relation.
2. Secondary Index - A secondary index may be generated from a field that is a candidate key with a unique value in every record, or from a non-key field with duplicate values.
3. Clustering Index - A clustering index is defined on an ordered data file that is ordered on a non-key field. In a clustering index, the search-key order corresponds to the sequential order of the records in the data file. If the search key is a candidate key (and therefore unique), it is also called a primary index.
4. Non-Clustering Index - Non-clustering indexes are used to quickly find all records whose values in a certain field satisfy some condition. A non-clustering index stores the data and the index in different orders: it is an index
whose search key specifies an order different from the sequential order of the file. Non-clustering indexes are also called secondary indexes.
Depending on what we put into the index, we have:
Sparse index (an index entry for only some tuples)
Dense index (an index entry for each tuple)
A clustering index is usually sparse (clustering indexes can be dense or sparse), while a non-clustering index must be dense.
Ordered indexing is of two types:
1. Dense Index
2. Sparse Index
Dense Index
In a dense index, there is an index record for every search-key value in the database. This makes searching faster but requires more space to store the index records themselves. Each index record contains a search-key value and a pointer to the actual record on the disk.
  • 220.
Sparse Index
In a sparse index, index records are not created for every search key. An index record here contains a search key and an actual pointer to the data on the disk. To search a record, we first follow the index record to reach the actual location of the data. If the data we are looking for is not where we land by following the index, the system starts a sequential search until the desired data is found.
Multilevel Index
Index records comprise search-key values and data pointers. A multilevel index is stored on the disk along with the actual database files. As the size of the database grows, so does the size of the indices. There is an immense need to keep the index records in main memory to speed up search operations. If a single-level index is used, a large index cannot be kept in memory, which leads to multiple disk accesses.
A multilevel index helps break the index down into several smaller indices so that the outermost level is small enough to be saved in a single disk block, which can easily be accommodated anywhere in main memory.
B+ Tree
A B+ tree is a balanced multiway search tree that follows a multilevel index format. The leaf nodes of a B+ tree hold the actual data pointers. The B+ tree ensures that all leaf nodes remain at the same height and is thus balanced. Additionally, the leaf nodes are linked in a linked list; therefore, a B+ tree can support random access as well as sequential access.
Structure of B+ Tree
Every leaf node is at an equal distance from the root node. A B+ tree is of order n, where n is fixed for every B+ tree.
Internal nodes − Internal (non-leaf) nodes contain at least ⌈n/2⌉ pointers, except the root node. At most, an internal node can contain n pointers.
Leaf nodes − Leaf nodes contain at least ⌈n/2⌉ record pointers and ⌈n/2⌉ key values. At most, a leaf node can contain n record pointers and n key values. Every leaf node contains one block pointer P to point to the next leaf node, forming a linked list.
Hash Organization
Hashing uses hash functions with search keys as parameters to generate the address of a data record.
Bucket − A hash file stores data in bucket format. The bucket is considered a unit of storage. A bucket typically stores one complete disk block, which in turn can store one or more records.
Hash Function − A hash function, h, is a mapping function that maps the set of all search keys K to the addresses where actual records are placed. It is a function from search keys to bucket addresses.
Types of Hashing Techniques
There are mainly two types of hashing methods/techniques:
1. Static hashing
2. Dynamic hashing/extendible hashing
Static Hashing
In static hashing, when a search-key value is provided, the hash function always computes the same address. Static hashing is further divided into:
1. Open hashing
2. Closed hashing
A sketch of a statically hashed structure in Oracle follows below.
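As a concrete illustration, Oracle's hash clusters are one implementation of static hashing: the number of hash keys is fixed when the cluster is created, and the hash function always maps a given key value to the same block. This is only a minimal sketch; the cluster, table, and column names are hypothetical.
-- Create a hash cluster with a fixed number of hash keys (static hashing)
CREATE CLUSTER emp_dept_cluster (deptno NUMBER(3))
  SIZE 512 HASHKEYS 100;
-- Rows of this table are stored in the block computed by hashing deptno
CREATE TABLE emp_hashed (
  empno  NUMBER PRIMARY KEY,
  ename  VARCHAR2(30),
  deptno NUMBER(3)
) CLUSTER emp_dept_cluster (deptno);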
Dynamic Hashing or Extendible Hashing
Dynamic hashing offers a mechanism in which data buckets are added and removed dynamically and on demand. In this hashing, the hash function helps you to create a large number of values. The problem with static hashing is that it does not expand or shrink dynamically as the size of the database grows or shrinks; dynamic hashing addresses this. Dynamic hashing is also known as extended hashing.
Key terms when dealing with hashing the records:
Bucket Overflow − The condition of bucket overflow is known as a collision. This is a fatal state for any static hash function. In this case, overflow chaining can be used.
Overflow Chaining − When buckets are full, a new bucket is allocated for the same hash result and is linked after the previous one. This mechanism is called closed hashing.
Linear Probing − When a hash function generates an address at which data is already stored, the next free bucket is allocated to it. This mechanism is called open hashing.
Data bucket − Data buckets are memory locations where the records are stored. A bucket is also known as a unit of storage.
Key − A DBMS key is an attribute or set of attributes that helps you to identify a row (tuple) in a relation (table). Keys also allow you to find the relationship between two tables.
Hash function − A hash function is a mapping function that maps the set of all search keys to the addresses where actual records are placed.
Linear probing − Linear probing uses a fixed interval between probes. In this method, the next available data block is used to enter the new record, instead of overwriting the older record.
Quadratic probing − Quadratic probing helps you to determine the new bucket address. The interval between probes is determined by adding the consecutive outputs of a quadratic polynomial to the starting value given by the original computation.
Hash index − A hash index is the address of the data block. A hash function can range from a simple mathematical function to a complex mathematical function.
Double hashing − Double hashing is a computer programming method used in hash tables to resolve collisions.
Bucket overflow − The condition of bucket overflow is called a collision. This is a fatal state for any static hash function.
Hashing function h(r) − The mapping from the index's search key to the bucket in which the (data entry for) record r belongs.
What is a collision? A hash collision is a state in which the resultant hashes from two or more data items in the data set wrongly map to the same place in the hash table.
How to deal with a hashing collision? There are two techniques that you can use to resolve a hash collision:
1. Rehashing: This method invokes a secondary hash function, which is applied repeatedly until an empty slot is found, where the record is then placed.
2. Chaining: The chaining method builds a linked list of items whose keys hash to the same value. This method requires an extra link field in each table position.
An index is an on-disk structure associated with a table or view that speeds the retrieval of rows from the table or view. An index contains keys built from one or more columns in the table or view. Indexes are automatically created when PRIMARY KEY and UNIQUE constraints are defined on table columns. An index on a file speeds up selections on the search-key fields for the index. The index is a collection of buckets. Bucket = primary page plus zero or more overflow pages. Buckets contain data entries.
Types of Indexes
1. Clustered index
2. Non-clustered index
3. Column store index
4. Filtered index
5. Hash-based index
6. Dense primary index
7. Sparse index
8. B-tree or B+ tree index
9. FK index
10. Secondary index
11. File indexing – B+ tree
12. Bitmap indexing
13. Inverted index
14. Forward index
15. Function-based index
16. Spatial index
17. Bitmap join index
18. Composite index
19. Primary key index: if the search key contains a primary key, it is called a primary index.
20. Unique index: the search key contains a candidate key.
21. Multilevel index: a multilevel index considers the index file, which we will now refer to as the first (or base) level of a multilevel index, as an ordered file with a distinct value for each K(i).
22. Inner index: the main index file for the data.
23. Outer index: a sparse index on the index.
A few of these index types are illustrated in the sketch after this list.
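The following statements sketch how several of the index types above are created in Oracle-style SQL. This is a minimal sketch only; the employees table and its columns are hypothetical.
-- B-tree index (the default index type)
CREATE INDEX emp_name_ix ON employees (last_name);
-- Unique index (the search key is a candidate key)
CREATE UNIQUE INDEX emp_email_ux ON employees (email);
-- Composite index on more than one column
CREATE INDEX emp_dept_job_ix ON employees (department_id, job_id);
-- Bitmap index, suited to low-cardinality columns
CREATE BITMAP INDEX emp_status_bx ON employees (status);
-- Function-based index on an expression
CREATE INDEX emp_upper_name_ix ON employees (UPPER(last_name));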
END
CHAPTER 11 DATABASE USERS AND DATABASE SECURITY MANAGEMENT
Overview of User and Schema in the Oracle DBMS environment
A schema is a collection of database objects, including logical structures such as tables, views, sequences, stored procedures, synonyms, indexes, clusters, and database links. A user owns a schema, and the user and the schema have the same name.
DBA basic roles and responsibilities
Duties of the DBA
A database administrator has some very precisely defined duties which the DBA needs to perform very religiously. A short account of these jobs is listed below:
1. Schema definition
2. Granting data access
3. Routine maintenance
4. Backups management
5. Monitoring running jobs
6. Installation and integration
7. Configuration and migration
8. Optimization and maintenance
9. Administration and customization
10. Upgrades and backup recovery
11. Database storage reorganization
12. Performance monitoring
13. Tablespace and disk storage space monitoring
Roles Category
Organizations normally hire DBAs in three roles:
1. L1 = Junior/fresher DBA, having 1 to 2 years of experience.
2. L2 = Intermediate DBA, having 2+ to 4 years of experience.
3. L3 = Advanced/expert DBA, having 4+ to 6 years of experience.
Component modules of a DBMS and their interactions.
Create Database User Command
The CREATE USER command creates a user. It also automatically creates a schema for that user: the schema is a logical structure for processing the data in the database, and it is created automatically by Oracle when the user is created.
Create Profile
SQL> Create profile clerk limit
  sessions_per_user 1
  idle_time 30
  connect_time 600;
Create User
SQL> Create user dcranney
  identified by bedrock
  default tablespace users
  temporary tablespace temp_ts
  profile clerk
  quota 500k on users1
  quota 0 on test_ts
  quota unlimited on users;
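A newly created user cannot connect to the database until it is granted the CREATE SESSION privilege. A minimal follow-up, continuing the example above:
SQL> Grant create session to dcranney;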
Roles And Privileges
What Is a Role
A role is a grouping of SYSTEM PRIVILEGES and/or OBJECT PRIVILEGES. Managing and controlling privileges is much easier when using roles. You can create roles, grant system and object privileges to the roles, and grant roles to users.
Examples of roles: CONNECT, RESOURCE, and DBA are pre-defined roles. These are created by Oracle when the database is created. You can grant these roles when you create a user.
To check roles, we use the following commands:
SYS> select * from ROLE_SYS_PRIVS where role = 'CONNECT';
SYS> select * from ROLE_SYS_PRIVS where role = 'DBA';
Note: The DBA role does NOT include the privileges to start up and shut down the database.
Roles are groups of privileges under a single name, and those privileges are assigned to users through roles. When you add a privilege to or delete a privilege from a role, all users and roles that are assigned that role automatically receive or lose that privilege. Assigning a password to a role is optional.
Whenever you create a role that is NOT IDENTIFIED, or IDENTIFIED EXTERNALLY, or IDENTIFIED BY PASSWORD, Oracle grants you the role WITH ADMIN OPTION. If you create a role IDENTIFIED GLOBALLY, the database does NOT grant you the role. If you omit both the NOT IDENTIFIED and IDENTIFIED clauses, the default is NOT IDENTIFIED.
CREATE A ROLE
SYS> create role SHARIF IDENTIFIED BY devdb;
GRANTING SYSTEM PRIVILEGES TO A ROLE
SYS> GRANT create table, create view, create synonym, create sequence, create trigger to SHARIF;
Grant succeeded.
GRANT A ROLE TO USERS
SYS> grant SHARIF to sony, scott;
ACTIVATE A ROLE
SCOTT> set role SHARIF identified by devdb;
DISABLE ALL ROLES
SCOTT> set role none;
GRANT A PRIVILEGE
SYS> grant create any table to SHARIF;
REVOKE A PRIVILEGE
SYS> revoke create any table from SHARIF;
SET ALL ROLES ASSIGNED TO scott AS DEFAULT
SYS> alter user scott default role all;
SYS> alter user scott default role SHARIF;
Grant and Revoke Privileges/Roles/Objects to Users
Sql> grant insert, update, delete, select on hr.employees to scott;
Grant succeeded.
Sql> grant insert, update, delete, select on hr.departments to scott;
Grant succeeded.
Sql> grant flashback on hr.employees to scott;
Grant succeeded.
Sql> grant flashback on hr.departments to scott;
Grant succeeded.
Sql> grant select any transaction to scott;
Sql> grant create any table, alter any table, select any table, insert any table, update any table, delete any table, drop any table to sharif;
Grant succeeded.
SHAM> grant all on EMP to SCOTT;
Grant succeeded.
SHAM> grant references on EMP to SCOTT;
Grant succeeded.
Sql> revoke all on suppliers from public;
SHAM> revoke all on EMP from SCOTT;
SHAM> revoke references on EMP from SCOTT CASCADE CONSTRAINTS;
Revoke succeeded.
SHAM> grant select on EMP to PUBLIC;
SYS> grant create session to PUBLIC;
Grant succeeded.
Note: If a privilege has been granted to PUBLIC, all users in the database can use it. PUBLIC sometimes acts like a ROLE and sometimes like a USER.
Note: Is there a DROP TABLE privilege in Oracle? NO. DROP TABLE is NOT a privilege.
What is a Privilege
A privilege is a special right or permission. Privileges are granted to perform operations in a database.
Example of a privilege: the CREATE SESSION privilege allows a user to connect to the Oracle database.
The syntax for revoking privileges on a table in Oracle is:
Revoke privileges on object from user;
Privileges can be assigned to a user or a role. Privileges are given to users with the GRANT command and taken away with the REVOKE command. There are two distinct types of privileges:
1. SYSTEM PRIVILEGES (granted by the DBA, like ALTER DATABASE, ALTER SESSION, ALTER SYSTEM, CREATE USER)
2. SCHEMA OBJECT PRIVILEGES
SYSTEM privileges are NOT directly related to any specific object or schema. Two types of users can GRANT and REVOKE SYSTEM PRIVILEGES to others:
- Users who have been granted a specific SYSTEM PRIVILEGE WITH ADMIN OPTION.
- Users who have been granted GRANT ANY PRIVILEGE.
You can GRANT and REVOKE system privileges to users and roles.
Powerful system privileges and roles: DBA, SYSDBA, SYSOPER (roles or privileges); SYS, SYSTEM (users; SYSTEM is also a tablespace).
OBJECT privileges are directly related to a specific object or schema.
- GRANT − To assign privileges or roles to a user, use the GRANT command.
- REVOKE − To remove privileges or roles from a user, use the REVOKE command.
An object privilege is the permission to perform certain actions on specific schema objects, including tables, views, sequences, procedures, functions, and packages.
- SYSTEM PRIVILEGES can be granted WITH ADMIN OPTION.
- OBJECT PRIVILEGES can be granted WITH GRANT OPTION.
Both forms are illustrated in the sketch below.
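A brief sketch of the two options, reusing the SHARIF role and the scott user from the earlier examples; the hr.employees table is assumed:
SYS> grant create session to SHARIF with admin option;
SYS> grant select on hr.employees to scott with grant option;
With ADMIN OPTION, the grantee can pass the system privilege on to others; with GRANT OPTION, the grantee can pass the object privilege on to others.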
Admin and Grant Options
With ADMIN Option (to USER, Role)
SYS> select * from dba_sys_privs where grantee in ('A','B','C');
GRANTEE PRIVILEGE ADM
------------------------------------------------
C CREATE SESSION YES
Note: By default, the ADM column in dba_sys_privs is NO.
Note: If you revoke a SYSTEM PRIVILEGE from a user, it has NO IMPACT on grants that user has made.
With GRANT Option (to USER, Role)
SONY can access the sham.emp table because the SELECT privilege was given to PUBLIC, so sham.emp is available to every user of the database. SONY has created a view EMP_VIEW based on sham.emp.
Note: If you revoke an OBJECT PRIVILEGE from a user, that privilege is also revoked from those to whom it was granted.
Note: If you grant the RESOURCE role to a user, this privilege overrides all explicit tablespace quotas. The UNLIMITED TABLESPACE system privilege lets the user allocate as much space as needed in any of the tablespaces that make up the database.
Database account lock and unlock
SQL> Alter user admin identified by admin account lock;
SQL> Select u.username from all_users u where u.username like 'INFO%';
Database security and non-database security
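For completeness, the reverse operations are equally simple; a minimal sketch on the same admin account:
SQL> Alter user admin account unlock;
SQL> Alter user admin password expire;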
END
CHAPTER 12 BUSINESS INTELLIGENCE TERMINOLOGIES IN DATABASE SYSTEMS
Overview: Database systems are used for processing day-to-day transactions, such as sending a text or booking a ticket online. This is also known as online transaction processing (OLTP). Databases are good for storing information about, and quickly looking up, specific transactions.
Decision support systems (DSS) are generally defined as the class of warehouse systems that deal with solving semi-structured problems.
DSS
A DSS helps businesses make sense of data so they can undergo more informed management decision-making. It has three branches: DWH, OLAP, and DM. I will discuss these in detail below.
Characteristics of a decision support system
DSS frameworks typically consist of three main components or characteristics:
The model management system: uses various algorithms in creating, storing, and manipulating data models.
The user interface: the front-end program that enables end users to interact with the DSS.
The knowledge base: a collection or summarization of all information, including raw data, documents, and personal knowledge.
What is a data warehouse?
A data warehouse is a collection of multidimensional, organization-wide data, typically used in business decision-making. Data warehouse toolkits for building out these large repositories generally use one of two architectures. The different approaches to building a data warehouse concentrate on the data storage layer:
Inmon's approach – designing centralized storage first and then creating data marts from the summarized data warehouse data and metadata. The type is normalized.
Focuses on data reorganization using relational database management systems (RDBMS)
Holds simple relational data between a core data repository and data marts, or subject-oriented databases
Ad-hoc SQL queries needed to access data are simple
Kimball's approach – creating data marts first and then developing a data warehouse database incrementally from independent data marts. The type is denormalized.
Focuses on infrastructure functionality using multidimensional database management systems (MDBMS), like the star schema or snowflake schema
Data Warehouse vs. Transactional System
Following are a few differences between a data warehouse and an operational database (transaction system):
A transactional system is designed for known workloads and transactions, like updating a user record, searching for a record, etc. DW transactions, however, are more complex and present a general form of data.
A transactional system contains the current data of an organization, whereas a DW normally contains historical data.
The transactional system supports the parallel processing of multiple transactions; concurrency control and recovery mechanisms are required to maintain the consistency of the database.
An operational database query allows read and modify operations (delete and update), while an OLAP query needs only read-only access to stored data (select statement).
DW involves data cleaning, data integration, and data consolidation. DW has a three-layer architecture: the data source layer, integration layer, and presentation layer. The following diagram shows the common architecture of a data warehouse system.
Types of Data Warehouse Systems
Following are the types of DW systems:
1. Data mart
2. Online analytical processing (OLAP)
3. Online transaction processing (OLTP)
4. Predictive analysis
Three-Tier Data Warehouse Architecture
Generally, a data warehouse adopts a three-tier architecture. Following are the three tiers of the data warehouse architecture:
Bottom tier − The bottom tier of the architecture is the data warehouse database server. It is a relational database system. We use back-end tools and utilities to feed data into the bottom tier. These back-end tools and utilities perform the extract, clean, load, and refresh functions.
Middle tier − In the middle tier, we have the OLAP server, which can be implemented in either of the following ways:
By relational OLAP (ROLAP), which is an extended relational database management system. The ROLAP maps the operations on multidimensional data to standard relational operations.
By the multidimensional OLAP (MOLAP) model, which directly implements the multidimensional data and operations.
Top tier − This tier is the front-end client layer. This layer holds the query tools, reporting tools, analysis tools, and data mining tools.
The following diagram depicts the three-tier architecture of the data warehouse.
Data Warehouse Models
From the perspective of data warehouse architecture, we have the following data warehouse models:
1. Virtual warehouse
2. Data mart
3. Enterprise warehouse
The view over an operational data warehouse is known as a virtual warehouse. It is easy to build a virtual warehouse, but building one requires excess capacity on operational database servers.
Building a Data Warehouse from Scratch: A Step-by-Step Plan
Step 1. Goals elicitation
Step 2. Conceptualization and platform selection
Step 3. Business case and project roadmap
Step 4. System analysis and data warehouse architecture design
Step 5. Development and stabilization
Step 6. Launch
Step 7. After-launch support
Data Mart
A data mart can be created from an existing data warehouse (the top-down approach) or from other sources, such as internal operational systems or external data. Similar to a data warehouse, it is a relational database that stores transactional data (time value, numerical order, reference to one or more objects) in columns and rows, making it easy to organize and access.
Data marts and data warehouses are both highly structured repositories where data is stored and managed until it is needed. Data marts are designed for a specific line of business, while a DWH is designed for enterprise-wide use. A data mart is typically under 100 GB, whereas a DWH is usually over 100 GB; a data mart covers a single subject, but a DWH is a multiple-subject repository. Data marts can be independent data marts or dependent data marts. A data mart contains a subset of organization-wide data; this subset of data is valuable to specific groups within an organization.
Fact and Dimension Tables
Types of facts:
Additive − Measures that can be added across all dimensions.
Semi-additive − Measures that may be added across some dimensions and not others.
Non-additive − Measures that store some basic unit of measurement of a business process and cannot be added. Some real-world examples include sales, phone calls, and orders.
Types of dimensions:
Conformed dimensions − A conformed dimension means the same thing with every fact to which it relates. This dimension is used in more than one star schema or data mart.
Outrigger dimensions − A dimension may have a reference to another dimension table. These secondary dimensions are called outrigger dimensions. This kind of dimension should be used carefully.
Shrunken rollup dimensions − Shrunken rollup dimensions are a subdivision of the rows and columns of a base dimension. These kinds of dimensions are useful for developing aggregated fact tables.
Dimension-to-dimension table joins − Dimensions may have references to other dimensions. However, these relationships can be modeled with outrigger dimensions.
Role-playing dimensions − A single physical dimension can be referenced multiple times in a fact table, with each reference linking to a logically distinct role for the dimension.
Junk dimensions − A junk dimension is a collection of random transactional codes, flags, or text attributes. It may not logically belong to any specific dimension.
Degenerate dimensions − A degenerate dimension is one without a corresponding dimension table. It is used in transaction and collecting-snapshot fact tables; this kind of dimension does not have its own table, as it is derived from the fact table.
Swappable dimensions − Swappable dimensions are used when the same fact table is paired with different versions of the same dimension.
A minimal star schema illustrating fact and dimension tables follows below.
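As an illustration, here is a minimal star schema in SQL: one fact table whose additive measures (quantity_sold, amount_sold) reference two dimension tables. This is only a sketch; all table and column names are hypothetical.
CREATE TABLE dim_time (
  time_id   NUMBER PRIMARY KEY,
  cal_month VARCHAR2(10),
  cal_year  NUMBER
);
CREATE TABLE dim_product (
  product_id   NUMBER PRIMARY KEY,
  product_name VARCHAR2(50),
  category     VARCHAR2(30)
);
-- The fact table holds foreign keys to each dimension plus additive measures
CREATE TABLE fact_sales (
  time_id       NUMBER REFERENCES dim_time (time_id),
  product_id    NUMBER REFERENCES dim_product (product_id),
  quantity_sold NUMBER,
  amount_sold   NUMBER(10,2)
);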
Step dimensions − Sequential processes, like web page events, mostly have a separate row in a fact table for every step in the process. A step dimension tells where the specific step fits in the overall session.
Extract Transform Load Tool Configuration (ETL/ELT)
Successful data migration includes:
Extracting the existing data.
Transforming the data so it matches the new formats.
Cleansing the data to address any quality issues.
Validating the data to make sure the move goes as planned.
Loading the data into the new system.
Staging area
ETL Cycle Flow
ETL to Data Warehouse, OLAP, and Business Reporting Tiers
Types of Data Warehouse Extraction Methods
There are two types of data warehouse extraction methods: logical and physical extraction methods.
1. Logical Extraction
The logical extraction method in turn has two methods:
i) Full extraction − For example, exporting a complete table in the form of a flat file.
ii) Incremental extraction − In incremental extraction, the changes in the source data need to be tracked since the last successful extraction (see the sketch at the end of this section).
2. Physical Extraction
Physical extraction has two methods, online and offline extraction:
i) Online extraction − In this process, the extraction process connects directly to the source system and extracts the source data.
ii) Offline extraction − The data is not extracted directly from the source system but is staged explicitly outside the source system.
Data Capture
Data capture is an advanced extraction process. It enables the extraction of data from documents, converting it into machine-readable data. This process is used to collect important organizational information when the source systems are in the form of paper/electronic documents (receipts, emails, contacts, etc.).
OLAP Model and Its Types
Online analytical processing (OLAP) is a tool that enables users to perform data analysis across various database systems simultaneously. Users can use this tool to extract, query, and retrieve data, and to analyze the collected data from diverse points of view.
There are three main types of OLAP servers:
ROLAP stands for relational OLAP, an application based on relational DBMSs.
MOLAP stands for multidimensional OLAP, an application based on multidimensional DBMSs.
HOLAP stands for hybrid OLAP, an application using both relational and multidimensional techniques.
An OLAP architecture of each type has these three components:
Database server.
ROLAP/MOLAP/HOLAP server.
Front-end tool.
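Returning to incremental extraction mentioned above: a common way to implement it is to track a last-modified timestamp on the source table. A minimal sketch, assuming a hypothetical orders table with a last_modified column and a bind variable that holds the time of the last successful extraction:
-- Extract only rows changed since the last successful run
SELECT *
  FROM orders
 WHERE last_modified > :last_extract_time;
-- After a successful load, the process records the new extraction time
-- so the next run picks up where this one left off.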
Characteristics of OLAP
The FASMI characteristics of OLAP methods, a term derived from the first letters of the characteristics (Fast Analysis of Shared Multidimensional Information), are:
Fast
The system is targeted to deliver most feedback to the client within about five seconds, with the most elementary analysis taking no more than one second and very few analyses taking more than 20 seconds.
Analysis
The method can cope with any business logic and statistical analysis that is relevant for the application and the user, and keeps it easy enough for the target client. Although some preprogramming may be needed, the user must be able to define new ad-hoc calculations as part of the analysis and to report on the data in any desired way, without having to program; so we exclude products (like Oracle Discoverer) that do not allow adequate end-user-oriented calculation flexibility.
Share
The system implements all the security requirements for confidentiality and, if multiple write connections are needed, concurrent update locking at an appropriate level. Not all applications need the user to write data back, but for the increasing number that do, the system should be able to manage multiple updates in a timely, secure manner.
Multidimensional
This is the basic requirement. The OLAP system must provide a multidimensional conceptual view of the data, including full support for hierarchies, as this is certainly the most logical way to analyze businesses and organizations.
OLAP Operations
Since OLAP servers are based on a multidimensional view of data, we will discuss OLAP operations on multidimensional data. Here is the list of OLAP operations:
1. Roll-up
2. Drill-down
3. Slice and dice
4. Pivot (rotate)
Roll-up
Roll-up performs aggregation on a data cube in any of the following ways:
By climbing up a concept hierarchy for a dimension
By dimension reduction
The following diagram illustrates how roll-up works.
Roll-up is performed by climbing up the concept hierarchy for the dimension location. Initially the concept hierarchy was "street < city < province < country". On rolling up, the data is aggregated by ascending the location hierarchy from the level of the city to the level of the country; the data is grouped into countries rather than cities. When roll-up is performed, one or more dimensions from the data cube are removed.
Drill-down
Drill-down is the reverse operation of roll-up. It is performed in either of the following ways:
By stepping down a concept hierarchy for a dimension
By introducing a new dimension
The following diagram illustrates how drill-down works.
Drill-down is performed by stepping down the concept hierarchy for the dimension time. Initially, the concept hierarchy was "day < month < quarter < year." On drilling down, the time dimension descends from the level of the quarter to the level of the month. When drill-down is performed, one or more dimensions are added to the data cube. It navigates from less detailed data to highly detailed data.
Slice
The slice operation selects one particular dimension from a given cube and provides a new sub-cube. Consider the following diagram, which shows how a slice works. Here the slice is performed for the dimension "time" using the criterion time = "Q1".
It forms a new sub-cube by selecting one or more dimensions.
Dice
Dice selects two or more dimensions from a given cube and provides a new sub-cube. Consider the following diagram that shows the dice operation. The dice operation on the cube is based on the following selection criteria, which involve three dimensions:
(location = "Toronto" or "Vancouver")
(time = "Q1" or "Q2")
(item = "Mobile" or "Modem")
Pivot
The pivot operation is also known as rotation. It rotates the data axes in view to provide an alternative presentation of the data. Consider the following diagram that shows the pivot operation. SQL sketches of slice, dice, and roll-up follow below.
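For reference, here is roughly how slice, dice, and roll-up look in plain SQL against a hypothetical sales table with location, item, quarter, amount, and the location hierarchy columns country, province, and city (a sketch only; cube engines execute these operations natively):
-- Slice: fix one dimension (time = 'Q1')
SELECT location, item, SUM(amount)
  FROM sales
 WHERE quarter = 'Q1'
 GROUP BY location, item;
-- Dice: restrict several dimensions at once
SELECT location, item, quarter, SUM(amount)
  FROM sales
 WHERE location IN ('Toronto', 'Vancouver')
   AND quarter  IN ('Q1', 'Q2')
   AND item     IN ('Mobile', 'Modem')
 GROUP BY location, item, quarter;
-- Roll-up: aggregate up the location hierarchy (city -> province -> country)
SELECT country, province, city, SUM(amount)
  FROM sales
 GROUP BY ROLLUP (country, province, city);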
Data marts also include hybrid data marts. A hybrid data mart combines data from an existing data warehouse and other operational source systems. It unites the speed and end-user focus of a top-down approach with the benefits of the enterprise-level integration of the bottom-up method.
Data mining techniques
There are many techniques used by data mining technology to make sense of your business data. Here are a few of the most common:
Association rule learning: Also known as market basket analysis, association rule learning looks for interesting relationships between variables in a dataset that might not be immediately apparent, such as determining which products are typically purchased together. This can be incredibly valuable for long-term planning.
Classification: This technique sorts items in a dataset into different target categories or classes based on common features. This allows the algorithm to neatly categorize even complex data cases.
Clustering: This approach groups similar data in a cluster. Outliers may go undetected, or they will fall outside the clusters. To help users understand the natural groupings or structure within the data, you can apply the process of partitioning a dataset into a set of meaningful sub-classes called clusters. This process looks at all the objects in the dataset and groups them based on similarity to each other, rather than on predetermined features.
Modeling is what people often think of when they think of data mining. Modeling is the process of taking some data and building a model that reflects that data. Usually, the aim is to address a specific problem by modeling the world in some way and, from the model, developing a better understanding of the world.
Decision tree: Another method for categorizing data is the decision tree. This method asks a series of cascading questions to sort items in the dataset into relevant classes.
Regression: This technique is used to predict a range of numeric values, such as sales, temperatures, or stock prices, based on a particular data set.
Here data can be smoothed by fitting it to a regression function. The regression used may be linear (having one independent variable) or multiple (having multiple independent variables). Regression is a technique that conforms data values to a function. Linear regression involves finding the "best" line to fit two attributes (or variables) so that one attribute can be used to predict the other.
Outlier detection: This data mining technique refers to the observation of data items in the dataset that do not match an expected pattern or expected behavior. This technique can be used in a variety of domains, such as intrusion detection, fraud or fault detection, etc. Outlier detection is also called outlier analysis or outlier mining.
Sequential patterns: This data mining technique helps to discover or identify similar patterns or trends in transaction data over a certain period.
Prediction: Where the end user can predict the most repeated things.
Knowledge Extraction from Business Intelligence Techniques
Data quality and data management components
Steps/Tasks Involved in Data Preprocessing
1. Data cleaning: The data can have many irrelevant and missing parts. To handle this, data cleaning is done. It involves handling missing data, noisy data, etc.: fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies (see the sketch after this list).
2. Data transformation: This step transforms the data into forms appropriate for the mining process.
3. Data discretization: Part of data reduction, but of particular importance, especially for numerical data.
4. Data reduction: Data mining is a technique used to handle huge amounts of data, and analysis becomes harder when working with such volumes. To get rid of this, we use the data reduction technique, which aims to increase storage efficiency and reduce data storage and analysis costs.
5. Data integration: Integration of multiple databases, data cubes, or files.
Methods of treating missing data:
1. Ignoring and discarding data
2. Filling in the missing value manually
3. Using a global constant to fill in the missing values
4. Imputation using the mean, median, or mode
5. Replacing missing values using a prediction/classification model
6. K-Nearest Neighbor (k-NN) approach (the best approach)
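Imputation using the mean (method 4 above) can be sketched in a single SQL statement; the employees table and its salary column are hypothetical:
-- Replace missing salaries with the mean of the known salaries
UPDATE employees
   SET salary = (SELECT ROUND(AVG(salary))
                   FROM employees
                  WHERE salary IS NOT NULL)
 WHERE salary IS NULL;
(AVG already ignores NULLs; the inner filter just makes that explicit.)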
Difference between a data steward and a data curator:
Information retrieval (IR) can be defined as a software program that deals with the organization, storage, retrieval, and evaluation of information from document repositories, particularly textual information. An information retrieval (IR) model selects and ranks the documents that are required by the user, or that the user has asked for in the form of a query.
Information retrieval vs. data retrieval:
Information retrieval − A software program that deals with the organization, storage, retrieval, and evaluation of information from document repositories, particularly textual information.
Data retrieval − Deals with obtaining data from a database management system such as an ODBMS; it is a process of identifying and retrieving data from the database based on the query provided by the user or application.
Information retrieval retrieves information about a subject; data retrieval determines the keywords in the user query and retrieves the data.
In information retrieval, small errors are likely to go unnoticed; in data retrieval, a single erroneous object means total failure.
Information retrieval is not always well structured and is semantically ambiguous; data retrieval has a well-defined structure and semantics.
Information retrieval does not provide a solution to the user of the database system; data retrieval provides solutions to the user of the database system.
In information retrieval, the results obtained are approximate matches and are ordered by relevance; in data retrieval, the results are exact matches and are not ordered by relevance.
Information retrieval is a probabilistic model; data retrieval is a deterministic model.
Techniques of information retrieval:
1. Traditional systems
2. Non-traditional systems
There are three types of information retrieval (IR) models:
1. Classical IR model
2. Non-classical IR model
3. Alternative IR model
Let's understand the classical IR models in further detail:
1. Boolean model − This model requires information to be translated into a Boolean expression and Boolean queries. The latter is used to determine the information needed, to be able to provide the right match when the Boolean expression is found to be true. It uses the Boolean operations AND, OR, and NOT to create a combination of multiple terms based on what the user asks.
2. Vector space model − This model takes documents and queries denoted as vectors and retrieves documents depending on how similar they are; the resulting vectors are then used to rank the search results.
3. Probability distribution model − In this model, the documents are considered as distributions of terms, and queries are matched based on the similarity of these representations. This is made possible using entropy or by computing the probable utility of the document. Probability distribution model types:
- Similarity-based probability distribution model
- Expected-utility-based probability distribution model
END
CHAPTER 13 DBMS INTEGRATION WITH BPMS
Overview: Business process management systems (BPMS) are significant extensions of workflow management (WFM) systems. A DBMS and a BPMS should be used simultaneously; together they give better performance. The BPMS holds operational data while the DBMS holds transactional and log data, and all the transactional data goes through the BPMS. The BPMS runs at the execution level and also holds document-flow data.
A key element of BPMN is the choice of shapes and icons used for the graphical elements identified in this specification. The intent is to create a standard visual language that all process modelers will recognize and understand. An implementation that creates and displays BPMN process diagrams SHALL use the graphical elements, shapes, and markers illustrated in this specification.
Six Sigma is another set of practices that originates from manufacturing, in particular from engineering and production practices at Motorola. The main characteristic of Six Sigma is its focus on the minimization of defects (errors). Six Sigma places a strong emphasis on measuring the output of processes or activities, especially in terms of quality, and encourages managers to systematically compare the effects of improvement initiatives on the outputs. Sigma symbolizes a single standard deviation from the mean.
The two main Six Sigma methodologies are DMAIC and DMADV. Each has its own set of recommended procedures to be implemented for business transformation.
DMAIC is a data-driven method used to improve existing products or services for better customer satisfaction. It is the acronym for the five phases: D – Define, M – Measure, A – Analyse, I – Improve, C – Control. DMAIC is applied in the manufacturing of a product or the delivery of a service.
DMADV is a part of the Design for Six Sigma (DFSS) process, used to design or re-design different processes of product manufacturing or service delivery. The five phases of DMADV are: D – Define, M – Measure, A – Analyse, D – Design, V – Validate.
A business process is a collection of related, structured activities that produce a specific service or a particular goal for a particular person or persons. Business process management (BPM) includes methods, techniques, and software to design, enact, control, and analyze operational processes. The BPM lifecycle is considered to have five stages: design, model, execute, monitor, and optimize, plus process reengineering.
The difference between BPM and BPMS: BPM is a discipline that uses various methods to discover, model, analyze, measure, improve, and optimize business processes. BPM is a method, technique, or way of being/doing, whereas a BPMS is a collection of technologies to help build software systems or applications that automate processes. A BPMS is a software tool used to improve an organization's business processes through the definition, automation, and analysis of business processes. It also acts as a valuable automation tool for businesses to generate a competitive advantage through cost reduction, process excellence, and continuous process improvement. As BPM is a discipline
used by organizations to identify, document, and improve their business processes, a BPMS is used to enable aspects of BPM.
Enactable business process model
Curtis et al. list five modeling goals: to facilitate human understanding and communication; to support process improvement; to support process management; to automate process guidance; and to automate execution support. We suggest that these goals, plus our additional goals of automating process execution and automating process management, are the goals of using a BPMS. These goals, which form a progression from problem description to solution design and then action, would be impossible to achieve without a process model. This is because an enactable model gives a BPMS a limited decision-making ability, the ability to generate change request signals to other sub-systems or team "members," and the ability to take account of endogenous or exogenous changes to itself, the business processes it manages, or the environment. Together these abilities enable the BPMS to make automatic changes to business processes within a scope limited to the coverage of its decision rules, the control privileges of its change request signals, and its ability to recognize patterns from its sensors.
Business Process Modeling Notation (BPMN)
BPMN has elements such as labels, tokens, activities, cases, events, processes, sequence symbols, etc.
BPMN
Task − A logical unit of work that is carried out as a single whole.
Resource − A person or a machine that can perform specific tasks.
Activity − The performance of a task by a resource.
Case − A sequence of activities performed to achieve some goal: an order, an insurance claim, a car assembly.
Work item − The combination of a case and a task that is just about to be carried out.
Process − Describes how a particular category of cases shall be managed.
Control flow construct − Sequence, selection, iteration, parallelisation.
BPMN concepts
Events − Things that happen instantaneously (e.g., the arrival of an invoice).
Activities − Units of work that have a duration (e.g., an activity to approve a claim).
Process − Events and activities are logically related.
Sequence − The most elementary form of relation: one event or activity A is followed by another event or activity B.
Start event − Drawn as a circle with a thin border.
End event − Drawn as a circle with a thick border.
Label − Gives a name or label to each activity and event.
Token − Once a process instance has been spawned, we use a token to identify the progress (or state) of that instance.
Gateway − A gating mechanism that either allows or disallows the passage of tokens through the gateway.
Split gateway − A point where the process flow diverges; it has one incoming sequence flow and multiple outgoing sequence flows (representing the branches that diverge).
Join gateway − A point where the process flow converges.
Mutually exclusive − Only one of the branches can be true each time the XOR split is reached by a token.
Exclusive (XOR) split − Models the relation between two or more alternative activities, as in the case of the approval or rejection of a claim.
Exclusive (XOR) join − Merges two or more alternative branches that may have previously been forked with an XOR split. Indicated with an empty diamond or a diamond marked with an "X".
Naming/label conventions in BPMN: The label should begin with a verb followed by a noun. The noun may be preceded by an adjective, and the verb may be followed by a complement to explain how the action is being done.
The flow of a process with Big Database
CHAPTER 14 RAID STRUCTURE AND MEMORY MANAGEMENT
Redundant Arrays of Independent Disks
RAID, or "Redundant Arrays of Independent Disks," is a technique that makes use of a combination of multiple disks instead of a single disk for increased performance, data redundancy, reliability, or a mix of these. The term was coined by David Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987.
Disk array − An arrangement of several disks that gives the abstraction of a single, large disk.
RAID techniques:
Details of RAID Structure
Row format and column format in the Oracle In-Memory structure
In-memory storage index
In-memory compression in storage
Storage manager components
Database system memory components:
1. CPU registers
2. Cache
3. Main memory
4. Flash memory (SSD, solid-state disk; also known as EEPROM, Electrically Erasable Programmable Read-Only Memory)
5. Magnetic disk (hard disks vs. floppy disks)
6. Optical disk (CD-ROM, CD-RW, DVD-RW, and DVD-RAM)
7. Tape storage
Performance measures of hard disks / accessing a disk page
1. Access time: the time from when a read or write request is issued to when the data transfer begins. It is composed of the time to access (read/write) a disk block:
- Seek time (moving the arm to position the disk head on the track)
- Rotational delay/latency (waiting for the block to rotate under the head)
- Data transfer time/rate (moving data to/from the disk surface)
Seek time and rotational delay dominate:
- Seek time varies from about 2 to 15 ms
- Rotational delay varies from 0 to 8.3 ms (about 4.2 ms on average)
- The transfer time is about 3.5 ms per 256 KB page
The key to lowering I/O cost is to reduce seek/rotation delays! Hardware vs. software solutions?
2. Data-transfer rate: the rate at which data can be retrieved from or stored on disk (e.g., 25-100 MB/s)
3. Mean time to failure (MTTF): the average time the disk is expected to run continuously without any failure
Block vs. Page vs. Sector
Block − A block is a sequence of bits and bytes, made up of a contiguous sequence of sectors from a single track; it has no single fixed size (the default NTFS block size is 4096 bytes). A block is also called a physical record on hard drives and floppies, and any data transferred between the hard disk and the RAM is usually sent in blocks. A block is a virtual memory unit that stores table rows and records logically in its segments; it is the smallest unit of logical memory, used to read a file or write data to a file. Blocks hold records of variable length with complex structure. For example, 4 tuples fit in one block if the block size is 2 KB, and 30 tuples fit in one block if the block size is 8 KB. If a new row/record is inserted, it goes into an existing block/page if that block/page has space; otherwise, it is assigned a new block within the file.
Page − A page is made up of unit blocks or groups of blocks. Pages have fixed sizes, usually 2K, 4K, or 8K. Pages are virtual blocks, and a disk can read/write a page faster. Each block/page consists of some records. Pages manage data that is stored in RAM; a page is loaded into the processor from the main memory, and as a physical memory unit it stores data physically in a disk file. Pages hold fixed-length records with an inflexible structure in memory, and processing with pages is easier/faster than with blocks. The OS prefers pages over blocks, but both are storage units.
Sector − A sector is a physical spot on a formatted disk that holds information; each sector can hold 512 bytes of data. A hard disk platter has many concentric circles on it, called tracks, and every track is further divided into sectors.
Block diagram depicting paging: the page map table (PMT) contains pages from page number 0 to 7.
Pinned block − A memory block that is not allowed to be written back to disk.
Toss-immediate strategy − Frees the space occupied by a block as soon as the final tuple of that block has been processed.
Example: Say we have an employee table with columns such as empid, name, CNIC, and email, where empid = 12 bytes, name = 59 bytes, CNIC = 15 bytes, and so on, so that all the employee table columns together take 230 bytes. Each row in the employee table then occupies 230 bytes, which means we can store only around 2 such rows in one small (512-byte) block.
Example: Say your hard drive has a block size of 4K, and you have a 4.5K file. This requires 8K to store on your hard drive (2 whole blocks), but only 4.5K on a floppy (9 floppy-size blocks).
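The block size an Oracle database actually uses can be checked directly, which makes such rows-per-block estimates concrete; a minimal sketch:
-- Show the database block size (commonly 8192 bytes)
SQL> select value from v$parameter where name = 'db_block_size';
-- With 230-byte rows, an 8192-byte block holds roughly 8192/230 = 35 rows
-- (fewer in practice, because of block header and row overhead).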
Buffer Manager / Buffer Management
Buffer − The portion of main memory available to store copies of disk blocks.
Buffer manager − The subsystem that is responsible for buffering disk blocks in main memory. The overall goal is to minimize the number of disk accesses. A buffer manager is similar to the virtual memory manager of an operating system.
Architecture: The buffer manager stages pages from external storage into the main memory buffer pool. The file and index layers make calls to the buffer manager.
What is the steal approach in a DBMS? What are the buffer manager policies/roles? How is data stored on disk?
Note: The buffer manager moves pages between the main memory buffer pool (volatile memory) and the external storage disk (non-volatile storage). When execution starts, the file and index layers make calls to the buffer manager.
The steal approach is used when the buffer manager replaces an existing page in the cache (one that has been updated by a transaction not yet committed) with another page requested by another transaction.
No-force: The force rule means that REDO will never be needed during recovery, since any committed transaction will have all its updates on disk before it is committed. The deferred update (NO-UNDO) recovery scheme uses a no-steal approach. However, typical database systems employ a steal/no-force strategy. The advantage of steal is that it avoids the need for very large buffer space.
Steal/No-steal: It would be easy to ensure atomicity with a no-steal policy. The no-steal policy states that pages cannot be evicted from memory (and thus written to disk) until the transaction commits. Steal requires support for undo: removing the effects of an uncommitted transaction from the disk.
Force/No-force: Durability can be a very simple property to ensure if we use a force policy. The force policy states that when a transaction executes, all modified data pages are forced to disk before the transaction commits.
Preferred Policy: Steal/No-Force
This combination is the most complicated but allows for the highest flexibility/performance.
STEAL complicates enforcing atomicity (this is why enforcing atomicity is hard).
NO-FORCE complicates enforcing durability (this is why enforcing durability is hard). In the no-force case, we need support for redo: completing a committed transaction's writes on disk.
Disk Access
File: A file is logically a sequence of records, where a record is a sequence of fields. The buffer manager stages pages from external storage to the main memory buffer pool; the file and index layers make calls to the buffer manager. The hard disk is also called secondary memory and is used to store data permanently; it is non-volatile.
File scans can be made fast with read-ahead (track-at-a-crack). This requires contiguous file allocation, so it may need to bypass the OS/file system.
Sorted files: records are sorted by the search key. Good for equality and range search.
Hashed files: records are grouped into buckets by the search key. Good for equality search.
Disks: can retrieve a random page at a fixed cost. Tapes: can only read pages sequentially.
Database tables and indexes may be stored on disk in one of several forms, including ordered/unordered flat files, ISAM, heap files, hash buckets, or B+ trees. The most used forms are B-trees and ISAM.
    Database Systems Handbook BY:MUHAMMAD SHARIF 297 Data on a hard disk is stored in microscopic areas called magnetic domains on the magnetic material. Each domain stores either 1 or 0 values. When the computer is switched off, then the head is lifted to a safe zone normally termed a safe parking zone to prevent the head from scratching against the data zone on a platter when the air bearing subsides. This process is called parking. The basic difference between the magnetic tape and magnetic disk is that magnetic tape is used for backups whereas, the magnetic disk is used as secondary storage. Dynamic Storage-Allocation Problem/Algorithms Memory allocation is a process by which computer programs are assigned memory or space. It is of four types: First Fit Allocation The first hole that is big enough is allocated to the program. In this type fit, the partition is allocated, which is the first sufficient block from the beginning of the main memory. Best Fit Allocation The smallest hole that is big enough is allocated to the program. It allocates the process to the partition that is the first smallest partition among the free partitions. Worst Fit Allocation The largest hole that is big enough is allocated to the program. It allocates the process to the partition, which is the largest sufficient freely available partition in the main memory. Next Fit allocation: It is mostly similar to the first Fit, but this Fit, searches for the first sufficient partition from the last allocation point. Note: First-fit and best-fit better than worst-fit in terms of speed and storage utilization Static and Dynamic Loading: To load a process into the main memory is done by a loader. There are two different types of loading : Static loading:- loading the entire program into a fixed address. It requires more memory space. Dynamic loading:- The entire program and all data of a process must be in physical memory for the process to execute. So, the size of a process is limited to the size of physical memory. Methods Involved in Memory Management There are various methods and with their help Memory Management can be done intelligently by the Operating System:  Fragmentation As processes are loaded and removed from memory, the free memory space is broken into little pieces. It happens after sometimes that processes cannot be allocated to memory blocks considering their small size and memory blocks remain unused. This problem is known as Fragmentation. Fragmentation Category −
Database Systems Handbook BY:MUHAMMAD SHARIF 298 1. External fragmentation: total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation: the memory block assigned to a process is bigger than requested; the leftover portion inside the block is unused and cannot be used by another process.
Note that in distributed databases, fragmentation means something different: a relation can be partitioned in two ways, horizontal fragmentation and vertical fragmentation. Hybrid fragmentation is achieved by performing horizontal and vertical partitioning together; mixed (hybrid) fragmentation is thus a group of rows and columns of a relation. The original relation in hybrid fragmentation is reconstructed by performing unions and (full outer) joins of the fragments.
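To make the allocation policies above concrete, here is a minimal, purely illustrative PL/SQL sketch of first-fit placement over a hypothetical list of free-hole sizes (the sizes and the request are invented for the example; in a real OS this logic lives in the kernel, not in SQL):
DECLARE
  TYPE t_holes IS TABLE OF PLS_INTEGER;
  v_holes   t_holes := t_holes(200, 600, 100, 500); -- free partition sizes in KB (illustrative)
  v_request PLS_INTEGER := 417;                     -- size of the process to place
BEGIN
  FOR i IN 1 .. v_holes.COUNT LOOP
    IF v_holes(i) >= v_request THEN                 -- first hole big enough wins
      DBMS_OUTPUT.PUT_LINE('First fit: hole #' || i || ' of ' || v_holes(i) || ' KB');
      v_holes(i) := v_holes(i) - v_request;         -- remainder stays behind as a smaller hole
      RETURN;
    END IF;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('No hole is large enough');
END;
/
Here first fit picks hole #2 (600 KB), leaving a 183 KB remainder; best fit would scan all holes and pick hole #4 (500 KB), leaving only 83 KB behind.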
Database Systems Handbook BY:MUHAMMAD SHARIF 299 Reduce external fragmentation by compaction:
● Shuffle memory contents to place all free memory together in one large block.
● Compaction is possible only if relocation is dynamic and done at execution time.
● I/O problem: latch the job in memory while it is involved in I/O, or do I/O only into OS buffers.
 Segmentation
Segmentation is a memory management technique in which each job is divided into several segments of different sizes, one for each module containing pieces that perform related functions. Each segment is a different logical address space of the program; a segment is a logical unit.
Segmentation with Paging
Both paging and segmentation have their advantages and disadvantages, so it is better to combine the two schemes to improve on each. The combined scheme is commonly known as segmentation with paging (paged segmentation). Each segment in this scheme is divided into pages, and each segment has its own page table. The logical address is divided into the following three parts:
Segment number (S)
Page number (P)
The displacement or offset number (D)
    Database Systems Handbook BY:MUHAMMAD SHARIF 300 As shown in the following diagram, the Intel 386 uses segmentation with paging for memory management with a two-level paging scheme
Database Systems Handbook BY:MUHAMMAD SHARIF 301  Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of the main memory (moved) to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory. Though performance is usually affected by swapping, it helps run multiple and large processes in parallel; for this reason swapping is also known as a technique for memory compaction.
Note: Bring a page into memory only when it is needed. The same page may be brought into memory several times.
 Paging
A page is a unit of data storage that is loaded into the processor from the main memory. A page is made up of unit blocks or groups of blocks. Pages have fixed sizes, usually 2 KB or 4 KB; a page is also called a virtual page or memory page. The transfer of pages between main memory and secondary memory is known as paging.
Paging is a memory management technique in which the process address space is broken into blocks of the same size called pages (the size is a power of 2, between 512 bytes and 8192 bytes). The size of a process is measured in the number of pages. Logical memory is divided into blocks of the same size, called pages. Similarly, main memory is divided into small fixed-size blocks of (physical) memory called frames; the size of a frame is kept the same as that of a page to obtain optimum utilization of main memory and to avoid external fragmentation.
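The frame/offset arithmetic behind paging can be shown with a tiny, purely illustrative PL/SQL sketch (the page size, page-table contents, and address below are all invented for the example):
DECLARE
  c_page_size CONSTANT PLS_INTEGER := 4096;             -- 4 KB pages (illustrative)
  TYPE t_page_table IS TABLE OF PLS_INTEGER;
  v_frames  t_page_table := t_page_table(7, 3, 11, 2);  -- frames for pages 0..3 (hypothetical)
  v_logical PLS_INTEGER := 9000;                        -- logical address to translate
  v_page    PLS_INTEGER := TRUNC(v_logical / c_page_size); -- page number = 2
  v_offset  PLS_INTEGER := MOD(v_logical, c_page_size);    -- offset within the page = 808
BEGIN
  -- physical address = frame number * page size + offset = 11 * 4096 + 808 = 45864
  DBMS_OUTPUT.PUT_LINE('physical = ' || (v_frames(v_page + 1) * c_page_size + v_offset));
END;
/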
Database Systems Handbook BY:MUHAMMAD SHARIF 302 A hard disk stores information in the form of magnetic fields. Data is stored digitally in the form of tiny magnetized regions on the platter, where each region represents a bit.
Microsoft SQL Server databases are stored on disk in two files: a data file and a log file.
Note: To run a program of size n pages, we need to find n free frames and load the program.
Implementation of the Page Table
The page table is kept in the main memory.
 The page-table base register (PTBR) points to the page table.
 The page-table length register (PTLR) indicates the size of the page table.
In this scheme, every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction. The two-memory-access problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffers (TLBs).
The Flow of Tasks in Memory
The program must be brought into memory and placed within a process for it to be run.
Database Systems Handbook BY:MUHAMMAD SHARIF 303 The input queue is the collection of processes on the disk that are waiting to be brought into memory to run.
Binding of Instructions and Data to Memory
Address binding of instructions and data to memory addresses can happen at three different stages:
Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
Load time: relocatable code must be generated if the memory location is not known at compile time.
Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This needs hardware support for address maps (e.g., base and limit registers).
Multistep Processing of a User Program in Memory
The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
Logical address - generated by the CPU; also referred to as a virtual address.
Physical address - the address seen by the memory unit.
Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
The user program deals with logical addresses; it never sees the real physical addresses. The logical address space of a process can be noncontiguous; the process is allocated physical memory whenever such memory is available.
    Database Systems Handbook BY:MUHAMMAD SHARIF 304 Address Translation Architecture END
Database Systems Handbook BY:MUHAMMAD SHARIF 305 CHAPTER 15 ORACLE DATABASE FUNDAMENTAL AND ITS ADMINISTRATION
Oracle Database History
I will use Oracle tools in this book.
Oracle versions and their meaning:
1. Oracle Database 11g
2. Oracle Database 12c
3. Oracle 18c (new name) = Oracle Database 12c Release 2 12.2.0.2 (patch set for 12c Release 2).
4. Oracle 19c (new name) = Oracle Database 12c Release 2 12.2.0.3 (terminal patch set for 12c Release 2).
These are the Oracle releases in Oracle's history.
Tools/utilities for administering an Oracle database:
 Oracle Universal Installer (utility that installs the Oracle software; it starts Oracle DBCA to create a database)
 Oracle DBCA (utility that creates a database from templates; it also enables you to create a database from a seed database)
 Database Upgrade Assistant (tool that upgrades a database to the newest Oracle release)
 Net Configuration Assistant (NETCA for short; tool that enables you to configure the listener)
 Oracle Enterprise Manager Database Control (product that controls the database through a web-based interface and offers performance advisors)
Database Systems Handbook BY:MUHAMMAD SHARIF 306 Oracle DB editions are hierarchically broken down as follows:
Enterprise Edition: offers all features, including superior performance and security; the most robust edition.
Personal Edition: nearly the same as Enterprise Edition, except that it does not include the Oracle Real Application Clusters option.
Standard Edition: contains base functionality for users that do not require Enterprise Edition's robust package.
Express Edition (XE): the lightweight, free, limited edition for Windows and Linux.
Oracle Lite: for mobile devices.
Database Instance / Oracle Instance
A database instance is the interface between client applications (users) and the database. An Oracle instance consists of three main parts: the System Global Area (SGA), the Program Global Area (PGA), and the background processes. At startup, the instance:
Searches for a server parameter file in a platform-specific default location and, if it is not found, for a text initialization parameter file (specifying STARTUP with the SPFILE or PFILE parameters overrides the default behavior);
Reads the parameter file to determine the values of initialization parameters;
Allocates the SGA based on the initialization parameter settings;
Starts the Oracle background processes;
Opens the alert log and trace files and writes all explicit parameter settings to the alert log in valid parameter syntax.
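For example, the startup stages can be walked through one at a time from SQL*Plus when connected AS SYSDBA (the PFILE path below is purely illustrative):
STARTUP NOMOUNT          -- instance only: parameter file read, SGA allocated, background processes started
ALTER DATABASE MOUNT;    -- control file opened; database mounted but still closed
ALTER DATABASE OPEN;     -- data files and online redo logs opened for normal use
-- Or in one step: STARTUP
-- STARTUP PFILE='/u01/app/oracle/initORCL.ora' overrides the default parameter file search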
Database Systems Handbook BY:MUHAMMAD SHARIF 307 Oracle Database creates server processes to handle the requests of user processes connected to an instance. A server process can be either of the following:
A dedicated server process, which services only one user process.
A shared server process, which can service multiple user processes.
We can see that the listener has the default name "LISTENER" and listens for TCP connections on port 1521. The listener process is started when the server is started (or whenever the instance is started). The listener is only required for connections from remote clients and other servers, and the DBA creates and configures the listener. When a new connection comes in over the network, the listener passes the connection to Oracle.
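The listener is managed with the lsnrctl command-line utility, for example:
lsnrctl status    # show listener state and the services/instances it knows about
lsnrctl stop      # stop the listener (existing sessions are unaffected)
lsnrctl start     # start it again, reading listener.ora from $ORACLE_HOME/network/admin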
Database Systems Handbook BY:MUHAMMAD SHARIF 309 Main database shutdown modes:
Shutdown Normal | Transactional | Immediate | Abort
Database startup modes:
Startup restrict | Startup mount restrict | Startup force | Startup nomount | Startup mount | Open
Read-only modes:
Alter database open read only;
Alter database open;
Details of the shutdown modes:
Shutdown / shut / shutdown normal:
1. New connections are not allowed.
2. Connected users can complete their ongoing transactions.
3. Idle sessions will not be disconnected.
4. The database shuts down only when all connected users have logged out manually.
5. It is a graceful shutdown, so it does not require instance crash recovery (ICR) at the next startup.
6. A common SCN number is written to the control files and data files before the database shuts down.
Shutdown Transactional:
1. New connections are not allowed.
2. Connected users can complete their ongoing transactions.
3. Idle sessions will be disconnected.
4. The database shuts down once the ongoing transactions complete (commit/rollback).
Hence it is also a graceful shutdown and does not require ICR at the next startup.
Shutdown Immediate:
1. New connections are not allowed.
Database Systems Handbook BY:MUHAMMAD SHARIF 310 2. Connected users cannot complete an ongoing transaction.
3. Idle sessions will be disconnected.
4. Oracle rolls back the ongoing (uncommitted) transactions, and the database shuts down.
5. A common SCN number is written to the control files and data files before the database shuts down.
Hence it is also a graceful shutdown and does not require ICR at the next startup.
Shutdown Abort:
1. New connections are not allowed.
2. Connected users cannot complete an ongoing transaction.
3. Idle sessions will be disconnected.
4. The database shuts down abruptly (no commit, no rollback).
Hence it is an abrupt shutdown, and it requires ICR at the next startup.
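From SQL*Plus (connected AS SYSDBA), these modes map directly onto commands, for example:
SHUTDOWN IMMEDIATE               -- active transactions rolled back, then clean close; no ICR needed
SHUTDOWN ABORT                   -- abrupt stop; instance crash recovery runs at the next startup
STARTUP RESTRICT                 -- open, but only users with RESTRICTED SESSION may connect
STARTUP MOUNT
ALTER DATABASE OPEN READ ONLY;   -- queries allowed, no changes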
Database Systems Handbook BY:MUHAMMAD SHARIF 313 Types of Standby Databases
1. Physical Standby Database
2. Snapshot Standby Database
3. Logical Standby Database
Physical Standby Database
A physical standby database is physically identical to the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. The physical standby database is updated by performing recovery using redo data received from the primary database. Oracle Database 12c enables a physical standby database to receive and apply redo while it is open in read-only mode.
Logical Standby Database
A logical standby database contains the same logical information as the production database (unless configured to skip certain objects), although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database by transforming the data in the redo received from the primary database into SQL statements and then executing those SQL statements on the standby database. This is done with LogMiner technology applied to the redo data received from the primary database. The tables in a logical standby database can be used simultaneously for recovery and other tasks such as reporting, summations, and queries.
A standby database is a transactionally consistent copy of the primary database. Using a backup copy of the primary database, you can create up to nine standby databases and incorporate them in a Data Guard configuration. By applying archived redo logs from the primary database to the standby database, you keep the two databases synchronized. A standby database has the following main purposes:
1. Disaster protection
2. Protection against data corruption
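You can check which role a database currently plays by querying V$DATABASE, for example:
SELECT name, database_role, open_mode FROM v$database;
-- DATABASE_ROLE is e.g. PRIMARY, PHYSICAL STANDBY, SNAPSHOT STANDBY, or LOGICAL STANDBY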
Database Systems Handbook BY:MUHAMMAD SHARIF 314 Snapshot Standby Database
A snapshot standby database is created by converting a physical standby database into a snapshot standby database. The snapshot standby database receives redo from the primary database but does not apply the redo data until it is converted back into a physical standby database. The snapshot standby database can be used for updates, but those updates are discarded before the snapshot standby database is converted back into a physical standby database. The snapshot standby database is appropriate when you require a temporary, updatable version of a physical standby database.
What is Cloning?
Database cloning is a procedure used to create an identical copy of an existing Oracle database. DBAs occasionally need to clone databases to test backup and recovery strategies, or to export a table that was dropped from the production database and import it back into production. Cloning can be done on a different host or on the same host, and a clone is distinct from a standby database. Database cloning can be done using the following methods:
Cold cloning
Hot cloning
RMAN cloning
The basic memory structures associated with Oracle Database include:
System global area (SGA): the SGA is a group of shared memory structures, known as SGA components, that contain data and control information for one Oracle Database instance. All server and background processes share the SGA. Examples of data stored in the SGA include cached data blocks and shared SQL areas.
Program global area (PGA): a PGA is a nonshared memory region that contains data and control information exclusively for use by one Oracle process. Oracle Database creates the PGA when an Oracle process starts. One PGA exists for each server process and background process. The collection of individual PGAs is the total instance PGA, or instance PGA. Database initialization parameters set the size of the instance PGA, not of individual PGAs.
User global area (UGA): the UGA is memory associated with a user session.
Software code areas: software code areas are portions of memory used to store code that is being run or can be run. Oracle Database code is stored in a software area that is typically at a different location from user programs - a more exclusive or protected location.
Oracle Initialization Parameters
    Database Systems Handbook BY:MUHAMMAD SHARIF 320 Oracle Database Logical Storage Structure Oracle allocates logical database space for all data in a database. The units of database space allocation are data blocks, extents, and segments. The Relationships Among Segments, Extents, Data Blocks in the data file, Oracle block, and OS block:
Database Systems Handbook BY:MUHAMMAD SHARIF 321 Oracle Block: at the finest level of granularity, Oracle stores data in data blocks (also called logical blocks, Oracle blocks, or pages). One data block corresponds to a specific number of bytes of physical database space on disk.
Oracle Extent: the next level of logical database space is the extent. An extent is a specific number of contiguous data blocks allocated for storing a specific type of information. Because its blocks are contiguous, an extent always lies within a single data file and cannot span tablespaces.
Oracle Segment: the level of logical database storage above an extent is called a segment. A segment is a set of extents, each of which has been allocated for a specific data structure, and all of which are stored in the same tablespace. For example, each table's data is stored in its own data segment, while each index's data is stored in its own index segment. If the table or index is partitioned, each partition is stored in its own segment.
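You can observe this hierarchy in the data dictionary views; for example (EMPLOYEES is just an illustrative table name):
SELECT segment_name, segment_type, tablespace_name, extents, blocks
FROM   user_segments
WHERE  segment_name = 'EMPLOYEES';

SELECT extent_id, blocks, bytes
FROM   user_extents
WHERE  segment_name = 'EMPLOYEES'
ORDER  BY extent_id;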
Database Systems Handbook BY:MUHAMMAD SHARIF 323 Data block: Oracle manages the storage space in the data files of a database in units called data blocks. A data block is the smallest unit of data used by the database. The Oracle (logical) block maps to physical storage: for example, a table's logical data is stored in its data segment, whose extents ultimately occupy data blocks in data files.
The high water mark is the boundary between used and unused space in a segment.
Operating system block: the data blocks in the data files are themselves stored in operating system blocks.
OS page: the smallest unit of storage that can be atomically written to non-volatile storage is called a page.
Details of data storage in Oracle blocks: an extent is a set of logically contiguous data blocks allocated for storing a specific type of information. In the figure above (2 KB blocks), the 24 KB extent has 12 data blocks, while the 72 KB extent has 36 data blocks.
A segment is a set of extents allocated for a specific database object, such as a table. For example, the data for the employees table is stored in its data segment, whereas each index on employees is stored in its own index segment. Every database object that consumes storage consists of a single segment.
Database Systems Handbook BY:MUHAMMAD SHARIF 326 A bigfile tablespace eases database administration because it consists of only one data file. The single data file can be up to 128 TB (terabytes) in size if the tablespace block size is 32 KB; if you use the more common 8 KB block size, 32 TB is the maximum size of a bigfile tablespace.
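As an illustration (the tablespace name, path, and sizes are invented for the example), a bigfile tablespace that is also locally managed with automatic segment space management could be created like this:
CREATE BIGFILE TABLESPACE big_data
  DATAFILE '/u01/app/oracle/oradata/ORCL/big_data01.dbf' SIZE 10G
  AUTOEXTEND ON NEXT 1G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE   -- bitmap-managed extents (the default)
  SEGMENT SPACE MANAGEMENT AUTO;         -- ASSM, discussed in the next section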
    Database Systems Handbook BY:MUHAMMAD SHARIF 327 Broad View of Logical and Physical Structure of Database System in Oracle. Oracle Database must use logical space management to track and allocate the extents in a tablespace. When a database object requires an extent, the database must have a method of finding and providing it. Similarly, when an object no longer requires an extent, the database must have a method of making the free extent available. Oracle Database manages space within a tablespace based on the type that you create.
Database Systems Handbook BY:MUHAMMAD SHARIF 328 You can create either of the following types of tablespaces:
Locally managed tablespaces (the default): the database uses bitmaps in the tablespaces themselves to manage extents, so locally managed tablespaces have a part of the tablespace set aside for a bitmap. Within such a tablespace, the database can manage segments with automatic segment space management (ASSM) or manual segment space management (MSSM).
Dictionary-managed tablespaces: the database uses the data dictionary to manage the extents.
Oracle Physical Storage Structure
Oracle Database Memory Management
Memory management involves maintaining optimal sizes for the Oracle instance memory structures as demands on the database change. Oracle Database manages memory based on the settings of memory-related initialization parameters. The basic options for memory management are as follows:
Automatic memory management: you specify the target size for the database instance memory. The instance automatically tunes to the target memory size, redistributing memory as needed between the SGA and the instance PGA.
Database Systems Handbook BY:MUHAMMAD SHARIF 329 Automatic shared memory management: this model is partially automated. You set a target size for the SGA and then have the option of setting an aggregate target size for the PGA or managing PGA work areas individually.
Manual memory management: instead of setting the total memory size, you set many initialization parameters to manage components of the SGA and instance PGA individually.
The SGA (System Global Area) is an area of memory (RAM) allocated when an Oracle instance starts up. The SGA's size and function are controlled by initialization (INIT.ORA or SPFILE) parameters. In general, the SGA consists of the following subcomponents, as can be verified by querying the V$SGAINFO view:
SELECT * FROM v$sgainfo;
The common components are:
Data buffer cache - caches data and index blocks for faster access.
Shared pool - caches parsed SQL and PL/SQL statements.
Dictionary cache - information about data dictionary objects.
Redo log buffer - committed transactions that are not yet written to the redo log files.
Java pool - caches parsed Java programs.
Streams pool - caches Oracle Streams objects.
Large pool - used for backups, UGAs, etc.
    Database Systems Handbook BY:MUHAMMAD SHARIF 330 Automatic Shared Memory Management simplifies the configuration of the SGA and is the recommended memory configuration. To use Automatic Shared Memory Management, set the SGA_TARGET initialization parameter to a nonzero value and set the STATISTICS_LEVEL initialization parameter to TYPICAL or ALL. The value of the SGA_TARGET parameter should be set to the amount of memory that you want to dedicate to the SGA. In response to the workload on the system, the automatic SGA management distributes the memory appropriately for the following memory pools: 1. Database buffer cache (default pool) 2. Shared pool 3. Large pool 4. Java pool 5. Streams pool
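For example, enabling Automatic Shared Memory Management requires setting only two parameters (the 2G target is illustrative; pick a value suited to your server):
ALTER SYSTEM SET STATISTICS_LEVEL = TYPICAL SCOPE=BOTH;
ALTER SYSTEM SET SGA_TARGET = 2G SCOPE=BOTH;
-- Setting SGA_TARGET = 0 disables automatic tuning and reverts to manually sized pools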
Database Systems Handbook BY:MUHAMMAD SHARIF 332 Oracle database files and ASM files comparison. END
    Database Systems Handbook BY:MUHAMMAD SHARIF 333 CHAPTER 16 DATABASE BACKUPS AND RECOVERY, LOGS MANAGEMENT Overview of Backup Solutions in Oracle Several circumstances can halt the operation of an Oracle database.
Database Systems Handbook BY:MUHAMMAD SHARIF 334 There are two ways to perform a data backup in Oracle: backups are divided into physical backups and logical backups.
Logical backups contain logical data (for example, tables and stored procedures) extracted with the Oracle Export utility and stored in a binary file. You can use logical backups to supplement physical backups. Backup sets are logical entities produced by the RMAN BACKUP command.
Oracle Recovery Manager (RMAN): backups are performed by server sessions (restore files, back up data files, recover data files). RMAN is the recommended approach. A user can log in to RMAN and command it to back up a database; RMAN can write backup sets to disk and tape, and can take a cold backup (offline database backup).
User-managed backup: performed with SQL*Plus and OS commands.
Exporting and importing data: SQL commands and command-line utilities (logical backup):
1. Data Pump Export and Data Pump Import (logical backup)
2. Export and Import (logical backup)
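A minimal RMAN session might look like this (the commands are standard RMAN; the target database and backup destination come from your configuration):
RMAN> CONNECT TARGET /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;        -- whole-database backup sets plus archived redo
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;    -- only blocks changed since the last level 0/1 backup
RMAN> LIST BACKUP SUMMARY;                    -- inventory of the backups RMAN knows about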
Database Systems Handbook BY:MUHAMMAD SHARIF 335 Physical backups
Physical backups, which are the primary concern in a backup and recovery strategy, are copies of physical database files. You can make physical backups with either the Oracle Recovery Manager (RMAN) utility or operating system utilities. For example, a physical backup might copy database content from a local disk drive to another secure location.
Physical backup types (cold, hot, full, incremental):
During an Oracle tablespace hot backup, you (or your script) put a tablespace into backup mode, copy the data files to disk or tape, and then take the tablespace out of backup mode.
Hot backup - also known as a dynamic or online backup; a backup performed while the database is actively online and accessible to users.
Cold backup - users cannot modify the database during a cold backup, so the database and the backup copy are always synchronized. A cold backup is used only when the service level allows for the required system downtime.
Full - creates a copy of data that can include parts of a database such as the control file, transaction files (redo logs), tablespaces, archive files, and data files. Regular cold full physical backups are recommended. The database must be in archivelog mode for a full physical backup taken while the database is open.
Incremental - captures only the changes made after the last full physical backup. An incremental backup can be done as a hot backup.
Cold-full backup - the database is shut down, all of the physical files are backed up, and the database is started up again.
Cold-partial backup - used when a full backup is not possible due to some physical constraints.
Hot-full backup - the database is not taken offline during the backup process; rather, the tablespaces and data files are put into a backup state.
Hot-partial backup - the database is not taken offline during the backup process, and different tablespaces are backed up on different nights.
Consistent and Inconsistent Backups
A consistent backup is one in which the files being backed up contain all changes up to the same system change number (SCN); the files in the backup contain all the data taken from the same point in time. Unlike an inconsistent backup, a consistent whole-database backup does not require recovery after it is restored.
An inconsistent backup is a backup of one or more database files that you make while the database is open or after the database has shut down abnormally.
Image backup / mirror backup
A full image backup, or mirror backup, is a replica of everything on your computer's hard drive, from the operating system, boot information, apps, and hidden files to your preferences and settings. Imaging software captures not only individual files but everything you need to get your system running again. Image copies are exact byte-for-byte copies of files. RMAN prefers to use an image copy over a backup set.
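The user-managed hot backup described above looks like this in SQL*Plus (the tablespace name and file paths are illustrative):
ALTER TABLESPACE users BEGIN BACKUP;
-- copy the tablespace's data files with OS commands, e.g.
-- cp /u01/app/oracle/oradata/ORCL/users01.dbf /backup/users01.dbf
ALTER TABLESPACE users END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;   -- archive the redo generated during the copy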
Database Systems Handbook BY:MUHAMMAD SHARIF 336 Restoring a database backup:
If you use SQL*Plus, you run the RECOVER command to perform recovery. If you use RMAN, you run the RMAN RECOVER command to perform recovery.
Flashback in Oracle is a set of tools that allow administrators and users to view and even manipulate the past state of data without having to recover to a fixed point in time. Using the FLASHBACK TABLE command, we can pull a dropped table out of the recycle bin and thereby restore it. At the physical level, Oracle Flashback Database provides a more efficient data protection alternative to database point-in-time recovery (DBPITR): if the current data files have unwanted changes, you can use the RMAN command FLASHBACK DATABASE to revert the data files to their contents at a past time.
Database exports/imports with Data Pump
Export the HR schema to a dump file named schema.dmp by issuing the following command at the system command prompt:
expdp system/password SCHEMAS=hr DIRECTORY=dmpdir DUMPFILE=schema.dmp LOGFILE=expschema.log
impdp user/password@db_name DIRECTORY=data_pump_dir DUMPFILE=dump_name.dmp SCHEMAS=emr REMAP_SCHEMA=mis:emr
(Note: the legacy exp/imp utilities used FROMUSER/TOUSER; with Data Pump impdp the equivalent is REMAP_SCHEMA.)
Crash recovery and log-based recovery
The log is a sequence of records. The log of each transaction is maintained in stable storage so that if a failure occurs, the database can be recovered from it.
Database Systems Handbook BY:MUHAMMAD SHARIF 337 Log management and its types
Log: an ordered list of REDO/UNDO actions.
A log record contains <XID, pageID, offset, length, old data, new data> plus additional control information. The fields are:
XID: transaction ID - tells us which transaction did this operation
pageID: which page has been modified
offset: where on the page the data started changing (typically in bytes)
length: how much data was changed (typically in bytes)
old data: what the data was originally (used for undo operations)
new data: what the data has been updated to (used for redo operations)
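Purely as an illustration of this record layout (not how Oracle physically stores its redo), the fields could be rendered as a table:
CREATE TABLE demo_update_log (
  lsn        NUMBER PRIMARY KEY,   -- log sequence number: position in the ordered log
  xid        NUMBER NOT NULL,      -- transaction that made the change
  page_id    NUMBER NOT NULL,      -- page that was modified
  rec_offset NUMBER,               -- byte offset of the change within the page
  rec_length NUMBER,               -- number of bytes changed
  old_data   RAW(2000),            -- before-image, used for UNDO
  new_data   RAW(2000)             -- after-image, used for REDO
);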
Database Systems Handbook BY:MUHAMMAD SHARIF 339 Checkpoint
A checkpoint is like a bookmark. While a transaction executes, checkpoints are marked, and as the steps of the transaction run, log records are written to the log files. A checkpoint declares a point before which all the logs have been stored permanently on disk and the database is in a consistent state. In the case of a crash, work and time are saved because the system can restart recovery from the checkpoint. Checkpointing is a quick way to limit the number of log records to scan on recovery.
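In Oracle you can also force a checkpoint manually, which writes all dirty buffers to the data files and advances the checkpoint SCN:
ALTER SYSTEM CHECKPOINT;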
Database Systems Handbook BY:MUHAMMAD SHARIF 340 Store the LSN of the most recent checkpoint in a master record on disk.
    Database Systems Handbook BY:MUHAMMAD SHARIF 342 System Catalog A repository of information describing the data in the database (metadata, data about data)
    Database Systems Handbook BY:MUHAMMAD SHARIF 343 Data Replication Replication is the process of copying and maintaining database objects in multiple databases that make up a distributed database system. Replication can improve the performance and protect the availability of applications because alternate data access options exist.
Database Systems Handbook BY:MUHAMMAD SHARIF 344 Oracle provides its own set of tools to replicate Oracle and integrate it with other databases. In this section, you will explore the tools provided by Oracle as well as open-source tools that can be used for Oracle database replication by implementing custom code.
The catalog is needed to keep track of the location of each fragment and replica.
Data replication techniques: synchronous vs. asynchronous
Synchronous: all replicas are kept up to date. Asynchronous: cheaper, but there is a delay in synchronization.
Regarding the timing of data transfer, there are two types of data replication:
Asynchronous replication: the data is first sent to the model server - the server from which the replicas take data - which acknowledges receipt to the client. From there, it copies the data out to the replicas at an unspecified or monitored pace. Asynchronous replication offers flexibility and ease of use, as replication happens in the background.
Synchronous replication: data is copied from the client to the model server and then replicated to all the replica servers before the client is notified that the data has been replicated. This takes longer to acknowledge than the asynchronous method, but it has the advantage of knowing that all data was copied before proceeding.
Methods to set up Oracle database replication
You can set up Oracle database replication using the following methods:
Method 1: Oracle database replication using Hevo Data
Method 2: Oracle database replication using a full backup and load approach
Method 3: Oracle database replication using a trigger-based approach
Method 4: Oracle database replication using Oracle GoldenGate CDC
Method 5: Oracle database replication using a custom script based on the binary log
Oracle types of data replication and integration in OLAP
Three main architectures:
Consolidation: all data is moved into a single database and managed from a central location. Oracle Real Application Clusters (Oracle RAC), grid computing, and Virtual Private Database (VPD) can help you consolidate information into a single database that is highly available, scalable, and secure.
Federation: data appears to be integrated into a single virtual database while remaining in its current distributed locations. Distributed queries, distributed SQL, and Oracle Database Gateway can help you create a federated database.
Sharing/mediation: multiple copies of the same information are maintained in multiple databases and application data stores. Data replication and messaging can help you share information among multiple databases.
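One classic, built-in way to replicate a table asynchronously is a fast-refresh materialized view over a database link (all names here are hypothetical, and fast refresh requires a materialized view log on the source table):
-- On the primary:
CREATE MATERIALIZED VIEW LOG ON employees;

-- On the replica, via the database link primary_db:
CREATE MATERIALIZED VIEW employees_mv
  REFRESH FAST ON DEMAND
  AS SELECT * FROM employees@primary_db;

-- Refresh asynchronously, e.g. from a scheduler job:
EXEC DBMS_MVIEW.REFRESH('EMPLOYEES_MV');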
    Database Systems Handbook BY:MUHAMMAD SHARIF 345 END
Database Systems Handbook BY:MUHAMMAD SHARIF 346 CHAPTER 17 PREREQUISITES OF STORAGE MANAGEMENT AND ORACLE INSTALLATION
Overview of Hardware Requirements
Hardware requirements you must meet before installing Oracle Management Service (OMS), a standalone Oracle Management Agent (Management Agent), and Oracle Management Repository (Management Repository):
Physical memory (RAM) => 256 MB minimum; 512 MB recommended; on Windows Vista, the minimum requirement is 512 MB
Virtual memory => double the amount of RAM
Disk space => basic installation type total: 2.04 GB; advanced installation types total: 1.94 GB
Video adapter => 256 colors
Processor => 550 MHz minimum; on Windows Vista, the minimum requirement is 800 MHz
In particular, here I will discuss the following:
1. CPU, RAM, heap size, and hard disk space requirements for OMS
2. CPU, RAM, and hard disk space requirements for a standalone Management Agent
3. CPU, RAM, and hard disk space requirements for the Management Repository
CPU, RAM, Heap Size, and Hard Disk Space Requirements for OMS (small / medium / large host):
CPU cores per host: 2 / 4 / 8
RAM: 4 GB / 6 GB / 8 GB
RAM with ADP and JVMD: 6 GB / 10 GB / 14 GB
Oracle WebLogic Server JVM heap size: 512 MB / 1 GB / 2 GB
Hard disk space: 7 GB / 7 GB / 7 GB
Hard disk space with ADP and JVMD: 10 GB / 12 GB / 14 GB
Note: while installing an additional OMS (by cloning an existing one), if you have installed BI Publisher on the source host, ensure that you have 7 GB of additional hard disk space on the destination host, for a total of 14 GB.
CPU, RAM, and Hard Disk Space Requirements for a Standalone Management Agent
For a standalone Oracle Management Agent, ensure that you have 2 CPU cores per host, 512 MB of RAM, and 1 GB of hard disk space.
CPU, RAM, and Hard Disk Space Requirements for the Management Repository (small / medium / large host):
CPU cores per host: 2 / 4 / 8
RAM: 4 GB / 6 GB / 8 GB
Hard disk space: 50 GB / 200 GB / 400 GB
Oracle Database Hardware Component Requirements for Windows x64
The following list gives the hardware components required for Oracle Database on Windows x64.
Windows x64 minimum hardware requirements:
Database Systems Handbook BY:MUHAMMAD SHARIF 347 System architecture => Processor: AMD64 and Intel EM64T
Physical memory (RAM) => 2 GB minimum
Virtual memory (swap) => if physical memory is between 2 GB and 16 GB, set virtual memory to 1 times the size of the RAM; if physical memory is more than 16 GB, set virtual memory to 16 GB
Disk space => typical install type total: 10 GB; advanced install types total: 10 GB
Video adapter => 256 colors
Screen resolution => 1024 x 768 minimum
Windows x64 minimum disk space requirements on NTFS (TEMP space / SYSTEM_DRIVE:\Program Files\Oracle\Inventory / Oracle home / data files* / total):
Enterprise Edition: 595 MB / 4.55 MB / 6.00 GB / 4.38 GB** / 10.38 GB**
Standard Edition 2: 595 MB / 4.55 MB / 5.50 GB / 4.24 GB** / 9.74 GB**
* Refers to the contents of the admin, cfgtoollogs, flash_recovery_area, and oradata directories in the ORACLE_BASE directory.
Memory requirements for installing Oracle Fusion Middleware (minimum physical memory / minimum available memory):
Linux: 4 GB / 8 GB
UNIX: 4 GB / 8 GB
Windows: 4 GB / 8 GB
Database Systems Handbook BY:MUHAMMAD SHARIF 348 Calculations for No-Compression Databases
To calculate database size when the compression option is none, use the formula:
number of blocks * (72 bytes + size of expanded data block)
For example, 1,000,000 blocks with an 8,192-byte expanded block size come to 1,000,000 * (72 + 8,192) = 8,264,000,000 bytes, roughly 7.7 GB.
Calculations for Compressed Databases
Because the compression method used can vary per block, the following calculation formulas are general estimates of the database size. Actual implementation could result in numbers larger or smaller than the calculations.
1. Bitmap compression
2. Index-value compression
3. RLE compression
4. zlib compression
5. Index files
The minimum size for the index is 8,216,576 bytes (8 MB). To calculate the size of a database index, including all index files, perform the following calculation:
number of existing blocks * 112 bytes = the size of the database index
About Calculating Database Limits
Use the size guidelines in this section to calculate Oracle Database limits.
Block size guidelines:
Maximum block size: 16,384 bytes (16 KB)
Minimum block size: 2 KB
Maximum blocks for each file: 4,194,304 blocks
Maximum possible file size with 16 KB blocks: 4,194,304 * 16,384 = 64 GB
Maximum number of files for each database, by block size:
2 KB block size: 20,000 files
4 KB block size: 40,000 files
8 KB block size: 65,536 files
16 KB block size: 65,536 files
Database Systems Handbook BY:MUHAMMAD SHARIF 349 Maximum file sizes:
Maximum size of a FAT file: 4 GB
Maximum file size in NTFS: 16 exabytes (EB)
Maximum database size: 65,536 * 64 GB, or approximately 4 petabytes (PB)
Maximum control file size: 20,000 blocks
Data Block Format
Every data block has a format or internal structure that enables the database to track the data and free space in the block. This format is similar whether the data block contains table, index, or table cluster data.
Oracle Database 12c installation steps
In this section, you will install the Oracle Database and create an Oracle Home User account. The OUI (Oracle Universal Installer) is used to install the Oracle software.
1. Expand the database folder that you extracted in the previous section. Double-click setup.
2. Click Yes in the User Account Control window to continue with the installation.
Database Systems Handbook BY:MUHAMMAD SHARIF 350 3. The Configure Security Updates window appears. Enter your email address and My Oracle Support password to receive security issue notifications via email; if you do not wish to receive notifications via email, deselect the option. Select "Skip software updates" if you do not want to apply any updates. Accept the default and click Next.
4. The Select Installation Option window appears with the following options:
Select "Create and configure a database" to install the database, create a database instance, and configure the database.
Select "Install database software only" to install only the database software.
Select "Upgrade an existing database" to upgrade a database that is already installed.
In this OBE, we create and configure the database. Select the "Create and configure a database" option and click Next.
5. The System Class window appears. Select Desktop Class or Server Class depending on the type of system you are using. In this OBE, we perform the installation on a desktop/laptop, so select Desktop Class and click Next.
6. The Oracle Home User Selection window appears. Starting with Oracle Database 12c Release 1 (12.1), Oracle Database on Microsoft Windows supports the use of an Oracle Home User, specified at the time of installation. This Oracle Home User is used to run the Windows services for an Oracle Home, and is similar to the oracle user on Oracle Database on Linux. This user is associated with an Oracle Home and cannot be changed to a different user after installation.
Note: Different Oracle Homes on a system can share the same Oracle Home User or use different Oracle Home Users. The Oracle Home User is different from the Oracle Installation User: the Oracle Installation User is the user who requires administrative privileges to install Oracle products, while the Oracle Home User is used to run the Windows services for the Oracle Home.
The window provides the following options:
1. If you select "Use Existing Windows User", the user credentials provided must be a standard Windows user account (not an administrator).
2. If this is a single-instance database installation, the user can be a local user, a domain user, or a managed services account.
3. If this is an Oracle RAC database installation, the existing user must be a Windows domain user. The Oracle installer will display an error if this user has administrator privileges.
4. If you select "Create New Windows User", the Oracle installer will create a new standard Windows user account. This user will be assigned as the Oracle Home User. Please note that this user will not have login privileges. This option is not available for an Oracle RAC database installation.
5. If you select "Use Windows Built-in Account", the system uses the Windows built-in account as the Oracle Home User.
Select the "Create New Windows User" option. Enter the user name as OracleHomeUser1 and password as Welcome1. Click Next.
Note: Remember the Windows user password; it will be required later to administer or manage database services.
7. The Typical Install Configuration window appears. Click on a text field and then the balloon icon to learn more about the field. Note that by default, the installer creates a container database along with a pluggable database called "pdborcl". The pluggable database contains the sample HR schema.
8. Change the global database name to orcl. Enter the administrative password as Oracle_1. This password will be used later to log in to administrator accounts such as SYS and SYSTEM.
Click Next. 9. The prerequisite checks are performed and a Summary window appears. Review the settings and click Install.
Database Systems Handbook BY:MUHAMMAD SHARIF 351 Note: Depending on your firewall settings, you may need to grant permissions to allow Java to access the network.
10. The progress window appears.
11. The Database Configuration Assistant starts and creates your database.
12. After the Database Configuration Assistant creates the database, you can navigate to https://localhost:5500/em as the SYS user to manage the database using Enterprise Manager Database Express. You can click "Password Management..." to unlock accounts. Click OK to continue.
13. The Finish window appears. Click Close to exit the Oracle Universal Installer.
14. To verify the installation, navigate to C:\Windows\system32 using Windows Explorer and double-click services. The Services window appears, displaying a list of services.
Note: In the advanced installation path, you also allocate memory at this step.
Database Systems Handbook BY:MUHAMMAD SHARIF 353 CHAPTER 18 ORACLE DATABASE APPLICATIONS DEVELOPMENT USING ORACLE APPLICATION EXPRESS
Overview of APEX, Its History, Architecture, and Management Utilities
The database manufacturer Oracle is well known for its relational database system "Oracle Database", which provides many efficient features for reading and writing large amounts of data. To cope with the growing demand for developing web applications very fast, Oracle has created the online development environment "Oracle APEX". The creators of Oracle Application Express say it can help you develop enterprise apps up to 20 times faster and with 100 times less code.
There is no need to spend time on the GUI at the very beginning; the developer can directly start with implementing the business logic. This is the reason why Oracle APEX is feasible for creating rapid GUI prototypes without logic, so prospective customers can get an idea of how their future application will look.
Oracle APEX - an extremely powerful tool
As you can see, Oracle APEX is an extremely powerful tool that allows you to easily create simple-to-powerful apps, and gives you a lot of control over their functions and appearance. You have many different components available, like charts, different types of reports, mobile layouts, REST web services, faceted search, card regions, and many more. And the cool thing is, it's going to get even better with time. Oracle's roadmap for the technology is extensive and mentions things such as:
Database Systems Handbook BY:MUHAMMAD SHARIF 354  Runtime application customization
 More analytics
 Machine learning
 Process modeling
 Support for MySQL
 Native map component (you'll be able to create a map like those in the Covid-19 apps I mentioned natively - right now you have to use additional tools for that, like JavaScript or a map plug-in)
 Oracle JET-based components (JavaScript Extension Toolkit - it's definitely not low-code, but it's got nice data visualizations)
 Expanded capabilities in APEX Service Cloud Console
 REST Service Catalog (I had to google around for the one I used, but in the future you'll have a catalog of freely available options to choose from)
 Integration with developer lifecycle services
 Improved printing and PDF export capabilities
As you can see, there are a lot of things that are worth waiting for. Oracle APEX is going to get a lot more powerful, and that's even more of a reason to get to know it and start using it.
Distinguishing Characteristics and APEX Data Sources
Database Systems Handbook BY:MUHAMMAD SHARIF 355 APEX history
APEX is a very powerful development tool used to create web-based, database-centric applications. The tool itself consists of a schema in the database with a lot of tables, views, and PL/SQL code. It is available for every edition of the database. The techniques used with this tool are PL/SQL, HTML, CSS, and JavaScript.
Before APEX there was WebDB, which was based on the same techniques. WebDB became part of Oracle Portal and disappeared in silence. The difference between APEX and WebDB is that WebDB generates packages that generate the HTML pages, while APEX generates the HTML pages at runtime from the repository. Despite this approach, APEX is amazingly fast.
APEX became available to the public in 2004, when it was part of version 10g of the database. At that time it was called HTML DB, and the first version was 1.5. Before HTML DB, it was called Oracle Flows, Oracle Platform, and Project Marvel.
Database Systems Handbook BY:MUHAMMAD SHARIF 356 Note: Starting with Oracle Database 12c Release 2 (12.2), Oracle Application Express is included in the Oracle Home on disk and is no longer installed by default in the database.
Oracle Application Express is included with the following Oracle Database releases:
Oracle Database 19c - Oracle Application Express Release 18.1
Oracle Database 18c - Oracle Application Express Release 5.1
Oracle Database 12c Release 2 (12.2) - Oracle Application Express Release 5.0
Oracle Database 12c Release 1 (12.1) - Oracle Application Express Release 4.2
Oracle Database 11g Release 2 (11.2) - Oracle Application Express Release 3.2
Oracle Database 11g Release 1 (11.1) - Oracle Application Express Release 3.0
Oracle Database releases less frequently than Oracle Application Express; therefore, Oracle recommends updating to the latest Oracle Application Express release available on Oracle Technology Network.
Within each application, you can also specify a Compatibility Mode in the Application Definition. The Compatibility Mode attribute controls the compatibility mode of the Application Express runtime engine. Compatibility Mode options include Pre 4.1, 4.1, 4.2, 5.0, 5.1/18.1, 18.2, 19.1, 19.2, and later versions.
Most recent Oracle APEX releases
Version 22: This release of Oracle APEX introduces Approvals and the Unified Task List, simplified Create Page wizards, readable application export formats, and the Data Generator. APEX 22.1 also brings several enhancements to existing components, such as tokenized row search, an easy way to sort regions, improvements to faceted search, additional customization of the PWA service worker, a more streamlined developer experience, and much more!
Version 21: This release of Oracle APEX introduces Smart Filters, Progressive Web Apps, and REST Service Catalogs. APEX 21.2 also brings greater UI flexibility with Universal Theme, new and updated page components, numerous improvements to the developer experience, and a whole lot more!
Especially now that Oracle has pointed out APEX as one of the important tools for building applications in its Oracle Database Cloud Service, this interest will only grow. APEX shared a lot of the characteristics of cloud computing, even before cloud computing became popular. These characteristics include:
 Elasticity
 Browser-based development and runtime
 RESTful web services (REST stands for Representational State Transfer)
    Database Systems Handbook BY:MUHAMMAD SHARIF 358 Oracle Database architecture. Because the database is doing all the hard work, the architecture is fairly simple. We only have to add a web server. We can choose one of the following web servers:  Oracle HTTP Server (OHS)  Embedded PL/SQL Gateway (EPG)  APEX Listener Oracle APEX has a strong history, starting with version 1.5, which came out in 2004 – it was known as HTML DB then (before it also had other names, like Flows and Project Marvel). Oracle APEX is a part of the Oracle RAD architecture and technology stack. What does it mean? “R” stands for REST, or rather ORDS – Oracle REST Data Services. ORDS is responsible for asking the database for the page and rendering it back to the client; “A” stands for APEX, Oracle Application Express, the topic of this article; “D” stands for Database, which is the place an APEX application resides in.
    Database Systems Handbook BY:MUHAMMAD SHARIF 359 Other methodologies that work well with Oracle Application Express include: Spiral - This approach is actually a series of short waterfall cycles. Each waterfall cycle yields new requirements and enables the development team to create a robust series of prototypes. Rapid application development (RAD) life cycle - This approach has a heavy emphasis on creating a prototype that closely resembles the final product. The prototype is an essential part of the requirements phase. One disadvantage of this model is that the emphasis on creating the prototype can cause scope creep; developers can lose sight of their initial goals in the attempt to create the perfect application.
Database Systems Handbook BY:MUHAMMAD SHARIF 360 These include the OAuth client, APEX user, database schema user, and OS user. While it is important to ensure your ORDS web services are secured, you also need to consider what a client has access to once authenticated. As a quick reminder: authentication confirms your identity and allows you into the system; authorization decides what you can do once you are in.
Oracle REST Data Services is a Java EE-based alternative to Oracle HTTP Server and mod_plsql. The Java EE implementation offers increased functionality, including command-line based configuration, enhanced security, file caching, and RESTful web services. Oracle REST Data Services also provides increased flexibility by supporting deployments using Oracle WebLogic Server, GlassFish Server, Apache Tomcat, and a standalone mode.
The Oracle Application Express architecture requires some form of web server to proxy requests between a web browser and the Oracle Application Express engine. Oracle REST Data Services satisfies this need, but its use goes beyond Oracle Application Express configurations. Oracle REST Data Services simplifies the deployment process because no Oracle home is required, as connectivity is provided using an embedded JDBC driver.
ORDS
ORDS, a Java-based application, enables developers with SQL and database skills to develop REST APIs for Oracle Database. You can deploy ORDS on web and application servers, including WebLogic®, Tomcat®, and GlassFish®, as shown in the following image.
ORDS is the middle-tier Java application that allows you to access your Oracle Database resources via REST APIs: you use standard HTTP(S) calls (GET | POST | PUT | DELETE) via URIs that ORDS makes available (/ords/database123/user3/module5/something/), and ORDS routes your request to the appropriate database, calls the appropriate query or PL/SQL anonymous block, and returns the output and HTTP codes.
    Database Systems Handbook BY:MUHAMMAD SHARIF 362 For most calls, that’s going to be the results of a SQL statement – paginated and formatted as JSON.
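As a sketch of how a table gets exposed this way (the schema and table names are hypothetical; the ORDS PL/SQL package provides the AutoREST enablement API):
BEGIN
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'HR',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'hr');
  ORDS.ENABLE_OBJECT(
    p_enabled     => TRUE,
    p_schema      => 'HR',
    p_object      => 'EMPLOYEES',
    p_object_type => 'TABLE');
  COMMIT;
END;
/
-- After this, GET http://host:port/ords/hr/employees/ returns the rows as paginated JSON.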
Database Systems Handbook BY:MUHAMMAD SHARIF 364 Oracle Cloud
You can run APEX in an Autonomous Database (ADB) - an elastic database that you can scale up. It is self-driving and self-healing, and can repair and upgrade itself. It comes in two flavours:
1. Autonomous Transaction Processing (ATP) - transaction processing; this is where APEX sees most use.
2. Autonomous Data Warehouse (ADW) - for more query-driven APEX applications. Reporting data is also a common use of Oracle APEX.
You can also use the new Database Cloud Service (DCS) - an APEX-only solution. For a fee, you can have a commercial application running on a database cloud service.
On-premises or Private Cloud
You can also run Oracle APEX on-premises or in a private cloud - anywhere a database runs. It can be a physical, dedicated server, a virtualized machine, or a Docker image (you can run it on your laptop and fire it up on a train or a plane - this is very popular among Oracle Application Express developers). You can also use it on Exadata - a super-powerful physical server available through cloud services.
Oracle utilities (locking pages, apps, workspaces)
Database Systems Handbook BY:MUHAMMAD SHARIF 366 Figures: workspace utility, application components, supporting objects.
Database Systems Handbook BY:MUHAMMAD SHARIF 367 Figures: shared components objects, utility components, remote development.
Database Systems Handbook BY:MUHAMMAD SHARIF 370 How to use APEX for free
Autonomous Always Free - you can choose the Autonomous Always Free option, running either on ATP or ADW. It is free for commercial use, but it does not benefit from the scalability of the autonomous databases.
Oracle Express Free Edition - you can also run the free Oracle Express (XE) edition on-premises, but in this case there is a limit on how much data you can store.
Fan-made and official containers - there are also various fan-made and official containers with APEX installed available on the Internet.
About Assigning Oracle Default Schemas to Workspaces
In order for an instance administrator to assign most Oracle default schemas to workspaces, a DBA must explicitly grant the privilege. When Oracle Application Express is installed, the instance administrator does not have the ability to assign Oracle default schemas to workspaces. Default schemas such as SYS, SYSTEM, and RMAN are reserved by Oracle for various product features and for internal use. Access to a default schema can be a very powerful privilege; for example, a workspace with access to the default schema SYSTEM can run applications that parse as the SYSTEM user.
In order for an instance administrator to have the ability to assign most Oracle default schemas to workspaces, the DBA must explicitly grant the privilege using SQL*Plus to run a procedure within the APEX_INSTANCE_ADMIN package.
Granting the Privilege to Assign Oracle Default Schemas
DBAs can grant an instance administrator the ability to assign Oracle schemas to workspaces by using SQL*Plus to run the APEX_INSTANCE_ADMIN.UNRESTRICT_SCHEMA procedure from within the Application Express engine schema. For example:
EXEC APEX_INSTANCE_ADMIN.UNRESTRICT_SCHEMA(p_schema => 'RMAN');
COMMIT;
Revoking the Privilege to Assign Oracle Default Schemas
DBAs can revoke the privilege to assign default schemas using SQL*Plus to run the APEX_INSTANCE_ADMIN.RESTRICT_SCHEMA procedure from within the Application Express engine schema. For example:
EXEC APEX_180100.APEX_INSTANCE_ADMIN.RESTRICT_SCHEMA(p_schema => 'RMAN');
COMMIT;
This example would prevent the Instance administrator from assigning the RMAN schema to any workspace. It does not, however, prevent workspaces that already have the RMAN schema assigned to them from using it.
Granting the Privilege to Assign Oracle Default Schemas
The DBA can grant an Oracle Application Express administrator the ability to assign Oracle default schemas to workspaces by using SQL*Plus to run the APEX_SITE_ADMIN_PRIVS.UNRESTRICT_SCHEMA procedure from within the Application Express engine schema. For example:
EXEC FLOWS_030100.APEX_SITE_ADMIN_PRIVS.UNRESTRICT_SCHEMA(p_schema => 'SYSTEM');
COMMIT;
This example would enable the Oracle Application Express administrator to assign the SYSTEM schema to any workspace.
Revoking the Privilege to Assign Oracle Default Schemas
The DBA can revoke this privilege using SQL*Plus to run the APEX_SITE_ADMIN_PRIVS.RESTRICT_SCHEMA procedure from within the Application Express engine schema. For example:
EXEC FLOWS_030100.APEX_SITE_ADMIN_PRIVS.RESTRICT_SCHEMA(p_schema => 'SYSTEM');
COMMIT;
The following query dumps the tables that define the schema and workspace restrictions:
SELECT a.schema "SCHEMA", b.workspace_name "WORKSPACE"
FROM WWV_FLOW_RESTRICTED_SCHEMAS a, WWV_FLOW_RSCHEMA_EXCEPTIONS b
WHERE b.schema_id (+) = a.id;
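Putting the two operations together, a minimal SQL*Plus session might look like the sketch below. The engine schema name APEX_210200 is an assumption; substitute the Application Express engine schema of your own instance.
-- Allow Instance administrators to assign the RMAN schema to workspaces (sketch)
EXEC APEX_210200.APEX_INSTANCE_ADMIN.UNRESTRICT_SCHEMA(p_schema => 'RMAN');
COMMIT;
-- ...and later take the privilege away again
EXEC APEX_210200.APEX_INSTANCE_ADMIN.RESTRICT_SCHEMA(p_schema => 'RMAN');
COMMIT;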
Oracle application/workspace schema assignments
The following Oracle APEX dictionary views describe applications and their components. Each entry lists the view, its functionality, and its parent view.
APEX_APPLICATIONS: Applications defined in the current workspace or database user. (Parent: APEX_WORKSPACES)
APEX_APPLICATION_ALL_AUTH: All authorization schemes for all components, by application. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_AUTH: Identifies the available authentication schemes defined for an application. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_AUTHORIZATION: Identifies authorization schemes which can be applied at the application, page, or component level. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_BC_ENTRIES: Identifies breadcrumb entries which map to a page and identify a page's parent. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_BREADCRUMBS: Identifies the definition of a collection of breadcrumb entries which are used to identify a page hierarchy. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_BUILD_OPTIONS: Identifies build options available to an application. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_CACHING: Applications defined in the current workspace or database user. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_COMPUTATIONS: Identifies application computations which can run for every page or on login. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_GROUPS: Application groups defined per workspace. Applications can be associated with an application group. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_ITEMS: Identifies application items used to maintain session state that are not associated with a page. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_LISTS: Identifies a named collection of application list entries which can be included on any page using a region of type List. Display attributes are controlled using a list template. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_LIST_ENTRIES: Identifies the list entries which define a list. List entries can be hierarchical or flat. (Parent: APEX_APPLICATION_LISTS)
APEX_APPLICATION_LOCKED_PAGES: Locked pages of an application. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_LOVS: Identifies a shared list of values that can be referenced by a page item or report column. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_LOV_COLS: Identifies column metadata for a shared list of values. (Parent: APEX_APPLICATION_LOVS)
APEX_APPLICATION_LOV_ENTRIES: Identifies the list of values entries which comprise a shared list of values. (Parent: APEX_APPLICATION_LOVS)
APEX_APPLICATION_NAV_BAR: Identifies navigation bar entries displayed on pages that use a page template that includes a #NAVIGATION_BAR# substitution string. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_PAGES: A page definition is the basic building block of a page. Page components including regions, items, buttons, computations, branches, validations, and processes further define the definition of a page. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_PAGE_BRANCHES: Identifies branch processing associated with a page. A branch is a directive to navigate to a page or URL which is run at the conclusion of page accept processing. (Parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_BUTTONS: Identifies buttons associated with a page and region. (Parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_CHARTS: Identifies a chart associated with a page and region. (Parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_CHART_A: Identifies a chart axis associated with a chart on a page and region. (Parent: APEX_APPLICATION_PAGE_CHARTS)
APEX_APPLICATION_PAGE_CHART_S: Identifies a chart series associated with a chart on a page and region. (Parent: APEX_APPLICATION_PAGE_CHARTS)
APEX_APPLICATION_PAGE_COMP: Identifies the computation of item session state. (Parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_DA: Identifies dynamic actions associated with a page. (Parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_DA_ACTS: Identifies the actions of a dynamic action associated with a page. (Parent: APEX_APPLICATION_PAGE_DA)
APEX_APPLICATION_PAGE_DB_ITEMS: Identifies page items which are associated with database table columns. This view represents a subset of the items in the APEX_APPLICATION_PAGE_ITEMS view. (Parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_GROUPS: Identifies page groups. (Parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_IR: Identifies attributes of an interactive report. (Parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_IR_CAT: Report column category definitions for interactive report columns. (Parent: APEX_APPLICATION_PAGE_IR)
APEX_APPLICATION_PAGE_IR_CGRPS: Column group definitions for interactive report columns. (Parent: APEX_APPLICATION_PAGE_IR)
APEX_APPLICATION_PAGE_IR_COL: Report column definitions for interactive report columns. (Parent: APEX_APPLICATION_PAGE_IR)
APEX_APPLICATION_PAGE_IR_COMP: Identifies computations defined in user-level report settings for an interactive report. (Parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_IR_COND: Identifies filters and highlights defined in user-level report settings for an interactive report. (Parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_IR_GRPBY: Identifies the group-by view defined in user-level report settings for an interactive report. (Parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_IR_PIVOT: Identifies the pivot view defined in user-level report settings for an interactive report. (Parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_IR_PVAGG: Identifies aggregates defined for a pivot view in user-level report settings for an interactive report. (Parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_IR_PVSRT: Identifies sorts defined for a pivot view in user-level report settings for an interactive report. (Parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_IR_RPT: Identifies user-level report settings for an interactive report. (Parent: APEX_APPLICATION_PAGE_IR)
APEX_APPLICATION_PAGE_IR_SUB: Identifies subscriptions scheduled in saved reports for an interactive report. (Parent: APEX_APPLICATION_PAGE_IR_RPT)
APEX_APPLICATION_PAGE_ITEMS: Identifies page items which are used to render HTML form content. Items automatically maintain session state which can be accessed using bind variables or substitution strings. (Parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_MAP: Identifies the full breadcrumb path for each page with a breadcrumb entry. (Parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_PROC: Identifies SQL or PL/SQL processing associated with a page. (Parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_REGIONS: Identifies a content container associated with a page and displayed within a position defined by the page template. (Parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PAGE_REG_COLS: Region column definitions used for regions. (Parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_RPT: Printing attributes for regions that are reports. (Parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_RPT_COLS: Report column definitions used for classic reports, tabular forms, and interactive reports. (Parent: APEX_APPLICATION_PAGE_RPT)
APEX_APPLICATION_PAGE_TREES: Identifies a tree control which can be referenced and displayed by creating a region with a source of this tree. (Parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPLICATION_PAGE_VAL: Identifies validations associated with an application page. (Parent: APEX_APPLICATION_PAGES)
APEX_APPLICATION_PARENT_TABS: Identifies a collection of tabs called a tab set. Each tab is part of a tab set and can be current for one or more pages. Each tab can also have a corresponding parent tab if two levels of tabs are defined. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_PROCESSES: Identifies application processes which can run for every page, on login, or upon demand. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_RPT_LAYOUTS: Identifies report layouts which can be referenced by report queries and classic reports. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_RPT_QRY_STMTS: Identifies individual SQL statements which are part of a report query. (Parent: APEX_APPLICATION_RPT_QUERIES)
APEX_APPLICATION_RPT_QUERIES: Identifies report queries, which are printable documents that can be integrated with an application using buttons, list items, and branches. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_SETTINGS: Identifies application settings typically used by applications to manage configuration parameter names and values. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_SHORTCUTS: Identifies application shortcuts which can be referenced using "MY_SHORTCUT" syntax. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_STATIC_FILES: Stores the files, such as CSS, images, and JavaScript files, of an application. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_SUBSTITUTIONS: Application-level definitions of substitution strings. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_SUPP_OBJECTS: Identifies the supporting object installation messages. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_SUPP_OBJ_BOPT: Identifies the application build options that will be exposed to the supporting object installation. (Parent: APEX_APPLICATION_SUPP_OBJECTS)
APEX_APPLICATION_SUPP_OBJ_CHCK: Identifies the supporting object pre-installation checks to ensure the database is compatible with the objects to be installed. (Parent: APEX_APPLICATION_SUPP_OBJECTS)
APEX_APPLICATION_SUPP_OBJ_SCR: Identifies the supporting object installation SQL scripts. (Parent: APEX_APPLICATION_SUPP_OBJECTS)
APEX_APPLICATION_TABS: Identifies a set of tabs collected into tab sets which are associated with a standard tab entry. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_TEMPLATES: Identifies reference counts for templates of all types. (Parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_BC: Identifies the HTML template markup used to render a breadcrumb. (Parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_BUTTON: Identifies the HTML template markup used to display a button. (Parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_CALENDAR: Identifies the HTML template markup used to display a calendar. (Parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_LABEL: Identifies a page item label's HTML template display attributes. (Parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_LIST: Identifies HTML template markup used to render a list with list elements. (Parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_PAGE: The page template which identifies the HTML used to organize and render page content. (Parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_POPUPLOV: Identifies the HTML template markup and some functionality of all popup list of values controls for this application. (Parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_REGION: Identifies a region's HTML template display attributes. (Parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TEMP_REPORT: Identifies the HTML template markup used to render report headings and rows. (Parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_THEMES: Identifies a named collection of templates. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_THEME_FILES: Stores the files, such as CSS, images, and JavaScript files, of a theme. (Parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_THEME_STYLES: The theme style identifies the CSS file URLs which should be used for a theme. (Parent: APEX_APPLICATION_THEMES)
APEX_APPLICATION_TRANSLATIONS: Identifies message primary-language text and translated text. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_TRANS_DYNAMIC: Application dynamic translations. These are created in the Translation section of Shared Components and referenced at runtime via the function APEX_LANG.LANG. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_TRANS_MAP: Application groups defined per workspace. Applications can be associated with an application group. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_TRANS_REPOS: Repository of translation strings. These are populated from the translation seeding process. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_TREES: Identifies a tree control which can be referenced and displayed by creating a region with a source of this tree. (Parent: APEX_APPLICATIONS)
APEX_APPLICATION_WEB_SERVICES: Web services referenceable from this application. (Parent: APEX_APPLICATIONS)
APEX_APPL_ACL_ROLES: Identifies application roles, which are workspace groups that are tied to a specific application. (Parent: APEX_APPLICATIONS)
APEX_APPL_ACL_USERS: Identifies application users used to map application users to application roles. (Parent: APEX_APPLICATIONS)
APEX_APPL_ACL_USER_ROLES: Identifies application users used to map application users to application roles. (Parent: APEX_APPL_ACL_USERS)
APEX_APPL_AUTOMATIONS: Stores the metadata for automations of an application. (Parent: APEX_APPLICATIONS)
APEX_APPL_AUTOMATION_ACTIONS: Identifies actions associated with an automation. (Parent: APEX_APPLICATIONS)
APEX_APPL_CONCATENATED_FILES: Concatenated files of a user interface. (Parent: APEX_APPLICATIONS)
APEX_APPL_DATA_LOADS: Identifies application data load definitions. (Parent: APEX_APPLICATIONS)
APEX_APPL_DATA_PROFILES: Available data profiles used for parsing CSV, XLSX, JSON, XML, and other data. (Parent: APEX_APPLICATIONS)
APEX_APPL_DATA_PROFILE_COLS: Data profile columns used for parsing JSON, XML, and other data. (Parent: APEX_APPLICATIONS)
APEX_APPL_DEVELOPER_COMMENTS: Developer comments of an application. (Parent: APEX_APPLICATIONS)
APEX_APPL_EMAIL_TEMPLATES: Stores the metadata for the email templates of an application. (Parent: APEX_APPLICATIONS)
APEX_APPL_LOAD_TABLES: Identifies application legacy data loading definitions. (Parent: APEX_APPLICATIONS)
APEX_APPL_LOAD_TABLE_LOOKUPS: Identifies the collection of key lookups of the data loading tables. (Parent: APEX_APPLICATIONS)
APEX_APPL_LOAD_TABLE_RULES: Identifies a collection of transformation rules that are to be used on the load tables. (Parent: APEX_APPLICATIONS)
APEX_APPL_PAGE_CALENDARS: Identifies application calendars. (Parent: APEX_APPLICATION_PAGES)
APEX_APPL_PAGE_CARDS: Cards definitions. (Parent: APEX_APPLICATION_PAGE_REGIONS)
APEX_APPL_PAGE_CARD_ACTIONS: Card actions definitions. (Parent: APEX_APPL_PAGE_CARDS)
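With these views, applications can be inspected from plain SQL. A minimal sketch that lists the applications in the current workspace together with their pages:
-- Applications and their pages, via the APEX dictionary views
SELECT a.application_id, a.application_name, p.page_id, p.page_name
FROM   apex_applications a
JOIN   apex_application_pages p ON p.application_id = a.application_id
ORDER  BY a.application_id, p.page_id;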
Some prerequisites to install Oracle APEX and ORDS are given below.
Setting up Oracle REST Data Services requires two steps:
1. Configuration, which creates the configuration files needed to run ORDS.
2. Installation, which allows ORDS to run and be called from a front-end web server: standalone/Jetty, WebLogic Server, Tomcat, or Glassfish.
This section shows how to install and configure APEX 21.2 with standalone ORDS 21.2. In previous versions, an upgrade was required when a release affected the first two numbers of the version (4.2 to 5.0, or 5.1 to 18.1); if the first two numbers of the version were not affected (5.1.3 to 5.1.4), you downloaded and applied a patch rather than doing the full installation. This is no longer the case.
Steps:
Setup (download both products at the same version and unzip the files to the same directory location)
Installation
Embedded PL/SQL Gateway (EPG) configuration
Oracle REST Data Services (ORDS) configuration
Oracle HTTP Server (OHS) configuration
Network ACLs
Step One
Create a new tablespace to act as the default tablespace for APEX.
-- For Oracle Managed Files (OMF).
CREATE TABLESPACE apex DATAFILE SIZE 100M AUTOEXTEND ON NEXT 1M;
-- For non-OMF.
CREATE TABLESPACE apex DATAFILE '/path/to/datafiles/apex01.dbf' SIZE 100M AUTOEXTEND ON NEXT 1M;
-- A locally managed tablespace with system-allocated extents.
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
-- A locally managed tablespace with uniform extents.
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
A datafile clause such as SIZE 1M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE 1M sets the initial space of each datafile to around 1 MB (roughly 1032 KB).
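After creating a tablespace, you can confirm its space-management settings with a quick dictionary query; a minimal sketch, using the tablespace names created above:
SELECT tablespace_name, extent_management, allocation_type
FROM   dba_tablespaces
WHERE  tablespace_name IN ('APEX', 'LMTBSB');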
Managing Space in Tablespaces
Tablespaces allocate space in extents. Tablespaces can use two different methods to keep track of their free and used space:
 Locally managed tablespaces: extent management by the tablespace
 Dictionary managed tablespaces: extent management by the data dictionary
When you create a tablespace, you choose one of these methods of space management. Later, you can change the management method with the DBMS_SPACE_ADMIN PL/SQL package.
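For example, a dictionary-managed tablespace can be migrated to local extent management with a single call; a sketch, where USERS stands in for your tablespace name:
EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('USERS');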
Step Two: Installation
Change directory to the directory holding the unzipped APEX software.
$ cd /home/oracle/apex
In this directory there are 3 important files:
apexins.sql – installs APEX in the database
apxchpwd.sql – changes the password for the main APEX user, ADMIN
apex_rest_config.sql – configures ORDS in the database
Step Three
Either: Connect to SQL*Plus as the SYS user and run the "apexins.sql" script, specifying the relevant tablespace names and image URL.
SQL> CONN sys@pdb1 AS SYSDBA
SQL> -- @apexins.sql tablespace_apex tablespace_files tablespace_temp images
SQL> @apexins.sql APEX APEX TEMP /i/
Or else: Log on to the database as SYSDBA, switch to the pluggable database orclpdb1, and run the installation script. You can install APEX in dedicated tablespaces if required.
sqlplus / as sysdba
alter session set container=orclpdb1;
@apexins.sql SYSAUX SYSAUX TEMP /i/
(Description of the command: @apexins.sql tablespace_apex tablespace_files tablespace_temp images
tablespace_apex - name of the tablespace for the APEX user.
tablespace_files - name of the tablespace for the APEX files user.
tablespace_temp - name of the temporary tablespace.
images - virtual directory for APEX images. Define the virtual image directory as /i/ for future updates.)
Step Four
If you want to add the user silently, you could run the following code, specifying the required password and email.
BEGIN
  APEX_UTIL.set_security_group_id( 10 );
  APEX_UTIL.create_user(
    p_user_name       => 'ADMIN',
    p_email_address   => 'me@example.com',
    p_web_password    => 'PutPasswordHere',
    p_developer_privs => 'ADMIN' );
  APEX_UTIL.set_security_group_id( null );
  COMMIT;
END;
/
Note: Oracle Application Express is installed in the APEX_210200 schema.
The structure of the link to the Application Express administration services is as follows:
http://host:port/ords/apex_admin
The structure of the link to the Application Express development interface is as follows:
http://host:port/ords
When Oracle Application Express installs, it creates three new database accounts, all with status LOCKED in the database:
APEX_210200 – the account that owns the Oracle Application Express schema and metadata.
FLOWS_FILES – the account that owns the Oracle Application Express uploaded files.
APEX_PUBLIC_USER – the minimally privileged account used for Oracle Application Express configuration with ORDS.
Create and change the password for the ADMIN account. When prompted, enter a password for the ADMIN account.
sqlplus / as sysdba
alter session set container=orclpdb1;
@apxchpwd.sql
output
SQL> @apxchpwd.sql
This script can be used to change the password of an Application Express
instance administrator. If the user does not yet exist, a user record will be created.
Enter the administrator's username [ADMIN]
User "ADMIN" does not yet exist and will be created.
Enter ADMIN's email [ADMIN]
Enter ADMIN's password []
Created instance administrator ADMIN.
Step Five
Create the APEX_LISTENER and APEX_REST_PUBLIC_USER users by running the "apex_rest_config.sql" script.
SQL> CONN sys@pdb1 AS SYSDBA
SQL> @apex_rest_config.sql
Configure RESTful Services. When prompted, enter a password for the APEX_LISTENER and APEX_REST_PUBLIC_USER accounts.
sqlplus / as sysdba
alter session set container=orclpdb1;
@apex_rest_config.sql
output
SQL> @apex_rest_config.sql
Enter a password for the APEX_LISTENER user []
Enter a password for the APEX_REST_PUBLIC_USER user []
...set_appun.sql
...setting session environment
...create APEX_LISTENER and APEX_REST_PUBLIC_USER users
...grants for APEX_LISTENER and ORDS_METADATA user
As a last step you can modify the passwords for the 3 users again:
ALTER USER apex_public_user IDENTIFIED BY Dbaora$ ACCOUNT UNLOCK;
ALTER USER apex_listener IDENTIFIED BY Dbaora$ ACCOUNT UNLOCK;
ALTER USER apex_rest_public_user IDENTIFIED BY Dbaora$ ACCOUNT UNLOCK;
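To confirm the accounts are in the expected state afterwards, a quick check against the data dictionary (a minimal sketch):
SELECT username, account_status
FROM   dba_users
WHERE  username IN ('APEX_PUBLIC_USER', 'APEX_LISTENER', 'APEX_REST_PUBLIC_USER');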
Install and configure
You can install and configure APEX and ORDS by using the following methods:
 Install APEX and ORDS and configure ORDS.
 Install APEX and configure a web listener: embedded PL/SQL gateway.
 Install APEX and configure the legacy web listener: Oracle HTTP Server.
For this section, I chose the first option, which Oracle recommends: install APEX and ORDS and configure ORDS.
Step Six
Now you need to decide which gateway to use to access APEX. The Oracle recommendation is ORDS.
Note: Oracle REST Data Services (ORDS), formerly known as the APEX Listener, allows APEX applications to be deployed without the use of Oracle HTTP Server (OHS) and mod_plsql or the Embedded PL/SQL Gateway. ORDS version 3.0 onward also includes JSON API support to work in conjunction with the JSON support in the database. ORDS can be deployed on WebLogic or Tomcat, or run in standalone mode, which is what this section uses.
For Lone-PDB installations (a CDB with one PDB), or for CDBs with small numbers of PDBs, ORDS can be installed directly into the PDB. If you are using many PDBs per CDB, you may prefer to install ORDS into the CDB to allow all PDBs to share the same connection pool.
Create the directory /home/oracle/ords for the ORDS software and unzip it:
mkdir /home/oracle/ords
cp ords-21.4.2.062.1806.zip /home/oracle/ords
cd /home/oracle/ords
unzip ords-21.4.2.062.1806.zip
Create the configuration directory /home/oracle/ords/conf for ORDS standalone:
mkdir /home/oracle/ords/conf
When you run ORDS for the first time, you are asked for:
directory to save configuration: /home/oracle/ords/conf
password for ORDS_PUBLIC_USER (to be created): Dbaora$
administrator user: SYS
password for SYS AS SYSDBA: !!! you must know it from your DBA !!!
use PL/SQL Gateway or not: 1 for yes
password for APEX_PUBLIC_USER: Dbaora$
password for APEX_LISTENER: Dbaora$
feature to enable: 1 for SQL Developer Web (enables all features)
wish to start in standalone mode: 1 for standalone mode
[oracle@oel8 ords]$ java -jar ords.war
This Oracle REST Data Services instance has not yet been configured.
Please complete the following prompts
Enter the location to store configuration data: /home/oracle/ords/conf
Enter the database password for ORDS_PUBLIC_USER:
Confirm password:
Requires login with administrator privileges to verify Oracle REST Data Services schema.
Enter the administrator username: sys
Enter the database password for SYS AS SYSDBA:
Confirm password:
Connecting to database user: SYS AS SYSDBA url: jdbc:oracle:thin:@//oel8.dbaora.com:1521/orclpdb1
Retrieving information.
Enter 1 if you want to use PL/SQL Gateway or 2 to skip this step.
If using Oracle Application Express or migrating from mod_plsql then you must enter 1 [1]:
Enter the database password for APEX_PUBLIC_USER:
Confirm password:
Enter the database password for APEX_LISTENER:
Confirm password:
Enter the database password for APEX_REST_PUBLIC_USER:
Confirm password:
Enter a number to select a feature to enable:
[1] SQL Developer Web (Enables all features)
[2] REST Enabled SQL
[3] Database API
[4] REST Enabled SQL and Database API
[5] None
Choose [1]: 1
2022-03-19T18:40:34.543Z INFO reloaded pools: []
Installing Oracle REST Data Services version 21.4.2.r0621806
... Log file written to /home/oracle/ords_install_core_2022-03-19_194034_00664.log
... Verified database prerequisites
... Created Oracle REST Data Services proxy user
... Created Oracle REST Data Services schema
... Granted privileges to Oracle REST Data Services
... Created Oracle REST Data Services database objects
... Log file written to /home/oracle/ords_install_datamodel_2022-03-19_194044_00387.log
... Log file written to /home/oracle/ords_install_scheduler_2022-03-19_194045_00075.log
... Log file written to /home/oracle/ords_install_apex_2022-03-19_194046_00484.log
Completed installation for Oracle REST Data Services version 21.4.2.r0621806. Elapsed time: 00:00:12.611
Enter 1 if you wish to start in standalone mode or 2 to exit [1]: 1
Enter 1 if using HTTP or 2 if using HTTPS [1]:
Choose [1]: 1
As a result, ORDS will be running in standalone mode and configured, so you can try to log on to APEX.
After a reboot of the machine, start ORDS in standalone mode in the background as follows:
cd /home/oracle/ords
java -jar ords.war standalone &
Verify APEX is working
Administration page: http://hostname:port/ords
In this case: http://oel8.dbaora.com:8080/ords
OR
Embedded PL/SQL Gateway (EPG) Configuration
If you want to use the Embedded PL/SQL Gateway (EPG) to front APEX, you can follow the instructions here. This is used for both the first installation and upgrades.
Run the "apex_epg_config.sql" script, passing in the base directory of the installation software as a parameter.
SQL> CONN sys@pdb1 AS SYSDBA
SQL> @apex_epg_config.sql /home/oracle
OR
Oracle HTTP Server (OHS) Configuration
If you want to use Oracle HTTP Server (OHS) to front APEX, you can follow the instructions here.
Change the password and unlock the APEX_PUBLIC_USER account. This will be used for any Database Access Descriptors (DADs).
SQL> ALTER USER APEX_PUBLIC_USER IDENTIFIED BY myPassword ACCOUNT UNLOCK;
Step Seven
Unlock the ANONYMOUS account.
SQL> CONN sys@cdb1 AS SYSDBA
DECLARE
  l_passwd VARCHAR2(40);
BEGIN
  l_passwd := DBMS_RANDOM.string('a',10) || DBMS_RANDOM.string('x',10) || '1#';
  -- Remove CONTAINER=ALL for non-CDB environments.
  EXECUTE IMMEDIATE 'ALTER USER anonymous IDENTIFIED BY ' || l_passwd || ' ACCOUNT UNLOCK CONTAINER=ALL';
END;
/
Check the port setting for the XML DB Protocol Server.
SQL> CONN sys@pdb1 AS SYSDBA
SQL> SELECT DBMS_XDB.gethttpport FROM DUAL;
GETHTTPPORT
-----------
0
1 row selected.
SQL>
If it is set to "0", you will need to set it to a non-zero value to enable it.
SQL> CONN sys@pdb1 AS SYSDBA
SQL> EXEC DBMS_XDB.sethttpport(8080);
Now APEX is available at the URL http://host:8080/apex/.
Recovery or uninstallation of ORDS
Starting/Stopping ORDS Under Tomcat
ORDS is started or stopped by starting or stopping the Tomcat instance it is deployed to. Assuming you have the CATALINA_HOME environment variable set correctly, the following commands should be used.
Oracle now supports Oracle REST Data Services (ORDS) running in standalone mode using the built-in Jetty web server, so you no longer need to worry about installing WebLogic, Glassfish, or Tomcat unless you have a compelling reason to do so. Removing this extra layer means one less layer to learn and one less layer to patch. ORDS can run as a standalone app with a built-in web server. This is perfect for local development purposes, but in the real world you will want a decent Java application server (Tomcat, Glassfish, or WebLogic) with a web server in front of it (Apache or Nginx).
export CATALINA_OPTS="$CATALINA_OPTS -Duser.timezone=UTC"
$ $CATALINA_HOME/bin/startup.sh
$ $CATALINA_HOME/bin/shutdown.sh
ORDS Validate
You can validate/fix the current ORDS installation using the validate option.
$ $JAVA_HOME/bin/java -jar ords.war validate
Enter the name of the database server [ol7-122.localdomain]:
Enter the database listen port [1521]:
Enter the database service name [pdb1]:
Requires SYS AS SYSDBA to verify Oracle REST Data Services schema.
Enter the database password for SYS AS SYSDBA:
Confirm password:
Retrieving information.
Oracle REST Data Services will be validated.
Validating Oracle REST Data Services schema version 18.2.0.r1831332
... Log file written to /u01/asi_test/ords/logs/ords_validate_core_2018-08-07_160549_00215.log
Completed validating Oracle REST Data Services version 18.2.0.r1831332. Elapsed time: 00:00:06.898
$
Manual ORDS Uninstall
In recent versions you can use the following command to uninstall ORDS, providing the information when prompted.
# su - tomcat
$ cd /u01/ords
$ $JAVA_HOME/bin/java -jar ords.war uninstall
Enter the name of the database server [ol7-122.localdomain]:
Enter the database listen port [1521]:
Enter 1 to specify the database service name, or 2 to specify the database SID [1]:
Enter the database service name [pdb1]:
Requires SYS AS SYSDBA to verify Oracle REST Data Services schema.
Enter the database password for SYS AS SYSDBA:
Confirm password:
Retrieving information
Uninstalling Oracle REST Data Services
... Log file written to /u01/ords/logs/ords_uninstall_core_2018-06-14_155123_00142.log
Completed uninstall for Oracle REST Data Services. Elapsed time: 00:00:10.876
$
In older versions of ORDS you had to extract scripts to perform the uninstall in the following way.
su - tomcat
cd /u01/ords
$JAVA_HOME/bin/java -jar ords.war ords-scripts --scriptdir /tmp
Perform the uninstall from the "oracle" user using the following commands.
su - oracle
cd /tmp/scripts/uninstall/core/
sqlplus sys@pdb1 as sysdba
@ords_manual_uninstall /tmp/scripts/logs
What is an APEX Workspace?
An APEX Workspace is a logical domain where you define APEX applications. Each workspace is associated with one or more database schemas (database users) which are used to store the database objects, such as tables, views, packages, and more. APEX applications are built on top of these database objects.
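As a rough illustration, an instance administrator can also create a workspace programmatically through the APEX_INSTANCE_ADMIN API; a minimal sketch, assuming a DEMO schema already exists (the workspace and schema names here are made up):
BEGIN
  -- Create a workspace named DEMO_WS whose primary parsing schema is DEMO
  APEX_INSTANCE_ADMIN.ADD_WORKSPACE(
    p_workspace      => 'DEMO_WS',
    p_primary_schema => 'DEMO' );
  COMMIT;
END;
/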
What is a Workspace Administrator?
Workspace administrators have all the rights and privileges available to developers and also manage administrator tasks specific to a workspace.
In Oracle Application Express, users sign in to a shared work area called a workspace. A workspace enables multiple users to work within the same Oracle Application Express installation while keeping their objects, data, and applications private. This flexible architecture enables a single database instance to manage thousands of applications.
Within a workspace, end users can only run existing database or websheet applications. Developers can create and edit applications, monitor workspace activity, and view dashboards.
Oracle Application Express includes two administrator roles:
1. Workspace administrators are users who perform administrator tasks specific to a workspace.
2. Instance administrators are superusers that manage an entire hosted Oracle Application Express instance, which may contain multiple workspaces.
Workspace administrators can reset passwords, view product and environment information, manage the Export repository, manage saved interactive reports, view the workspace summary report, and manage websheet database objects. Additionally, workspace administrators manage service requests, configure workspace preferences, manage user accounts, monitor workspace activity, and view log files.
Apex Development Models and RAD Development
One might think that since APEX is a development framework, there is no need for methodology. After all, it is a Rapid Application Development (RAD) tool. When developing applications using Application Builder, you must find a balance between two dramatically different development methodologies:
Iterative, rapid application development, or planned, linear-style development.
The first approach offers so much flexibility that you run the risk of never completing your project. In contrast, the second approach can yield applications that do not meet the needs of end users even if they meet the stated requirements on paper.
Oracle APEX is a full-spectrum technology. It can be used by so-called citizen developers, who can use the wizards to create simple applications to get going. However, these people can team up with a technical developer to create a more complex application together, and in such a case it also goes full spectrum: code by code, line by line, back-end development, front-end development, database development. If you get a perfect mix of front-end and back-end developers, then you can create a truly great APEX application.
System Development Life Cycle Methodologies to Consider
The system development life cycle (SDLC) is the overall process of developing software using a series of defined steps. There are several SDLC models that work well for developing applications in Oracle Application Express. Our methodology is composed of different elements related to all aspects of an APEX development project.
This methodology is referred to as a waterfall because the output from one stage is the input for the next stage. A primary problem with this approach is the assumption that all requirements can be established in advance. Unfortunately, requirements often change and evolve during the development process.
The Oracle Application Express development environment enables developers to take a more iterative approach to development. Unlike many other development environments, creating prototypes is easy. With Oracle Application Express, developers can:
Use built-in wizards to quickly design an application user interface
Make prototypes available to users and gather feedback
Implement changes in real time, creating new prototypes instantly
So how do I create such an app? I sign in to the APEX workspace, click the Create button, and choose the New Application option. I called my app "Warsaw Air Quality Log". For features, I select an About Page, Configuration Options, Activity Reporting, and Theme Style Selection. I leave the rest of the fields blank for now and instead just click Create Application. As you'll see when you check it out for yourselves, creating a basic app is very quick. Of course, I could have added more pages there and ticked more options, but that's all we need for now.
Apex Development
Deployment options to consider include:
Use the same workspace and same schema. Export and then import the application and install it using a different application ID. This approach works well when there are few changes to the underlying objects, but frequent changes to the application functionality.
Use a different workspace and the same schema. Export and then import the application into a different workspace. This is an effective way to prevent a production application from being modified by developers.
Use a different workspace and a different schema. Export and then import the application into a different workspace and install it so that it uses a different schema. This new schema needs to have the database objects required by your application. See "Using the Database Object Dependencies Report".
Use a different database with all its variations. Export and then import the application into a different Oracle Application Express instance and install it using a different workspace, schema, and database.
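All of these options start with an application export. Besides the App Builder UI, recent APEX releases expose exports through the APEX_EXPORT PL/SQL API; a minimal sketch, assuming application 100 exists in the current workspace:
DECLARE
  l_files apex_t_export_files;
BEGIN
  -- Produce the export files for application 100
  l_files := apex_export.get_application(p_application_id => 100);
  -- Each entry carries a file name (e.g. f100.sql) and its contents as a CLOB
  dbms_output.put_line(l_files(1).name || ': ' ||
                       length(l_files(1).contents) || ' characters');
END;
/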
Migration of Applications
Migration of Oracle Forms to APEX forms
After converting your Forms files into XML files, sign in to your APEX workspace and be sure you're using the schema that contains all database objects needed by the forms. Now, create a migration project and upload the XML files, following these steps:
1. Click App Builder.
2. Navigate to the right panel, click Oracle Forms Migrations.
3. Click Create Project.
4. Enter Project Name and Description.
5. Select the schema.
6. Upload the XML file.
7. Click Next.
8. Click Upload Another File if you have more XML files, otherwise click Create.
Now review each component in the uploaded forms to determine the proper regions to use in the APEX application. Also, review the Triggers and Program Units in order to identify the business logic in your Forms application and determine whether it will need to be replicated or not.
Oracle Forms applications still play a vital role, but many are looking for ways to modernize their applications. Modernize your Oracle Forms applications by migrating them to Oracle Application Express (Oracle APEX) in the cloud. Your stored procedures and PL/SQL packages work natively in Oracle APEX, making it the clear platform of choice for easily transitioning Oracle Forms applications to modern web applications with more capabilities, less complexity, and lower development and maintenance costs.
Oracle APEX is a low-code development platform that enables you to build scalable, secure enterprise apps, with world-class features, that you can deploy anywhere. You can quickly develop and deploy compelling apps that solve real problems and provide immediate value. You won't need to be an expert in a vast array of technologies to deliver sophisticated solutions.
Architecture
This architecture shows the process of migrating on-premises Oracle Forms applications to Oracle Application Express (APEX) applications with the help of an XML converter, and then moving them to the cloud. The following diagram illustrates this reference architecture.
Recommendations for migration of a database application
Use the following recommendations as a starting point to plan your migration to Oracle Application Express. Your requirements might differ from the architecture described here.
VCN
When you create a VCN, determine how many IP addresses your cloud resources in each subnet require. Using Classless Inter-Domain Routing (CIDR) notation, specify a subnet mask and a network address range large enough for the required IP addresses. Use CIDR blocks that are within the standard private IP address space. After you create a VCN, you can change, add, and remove its CIDR blocks. When you design the subnets, consider functionality and security requirements. All compute instances within the same tier or role should go into the same subnet.
Use regional subnets.
Security lists
Use security lists to define ingress and egress rules that apply to the entire subnet.
Cloud Guard
Clone and customize the default recipes provided by Oracle to create custom detector and responder recipes. These recipes enable you to specify what type of security violations generate a warning and what actions are allowed to be performed on them. For example, you might want to detect Object Storage buckets that have visibility set to public. Apply Cloud Guard at the tenancy level to cover the broadest scope and to reduce the administrative burden of maintaining multiple configurations. You can also use the Managed List feature to apply certain configurations to detectors.
Security Zones
For resources that require maximum security, Oracle recommends that you use security zones. A security zone is a compartment associated with an Oracle-defined recipe of security policies that are based on best practices. For example, the resources in a security zone must not be accessible from the public internet, and they must be encrypted using customer-managed keys. When you create and update resources in a security zone, Oracle Cloud Infrastructure validates the operations against the policies in the security-zone recipe and denies operations that violate any of the policies.
Schema
Retain the database structure that Oracle Forms was built on, as is, and use that as the schema for Oracle APEX.
Business Logic
Most of the business logic for Oracle Forms is in triggers, program units, and events. Before starting the migration of Oracle Forms to Oracle APEX, migrate the business logic to stored procedures, functions, and packages in the database (see the sketch at the end of this section).
Considerations
Consider the following key items when migrating Oracle Forms Object Navigator components to Oracle Application Express (APEX):
Data Blocks
A data block from Oracle Forms relates to Oracle APEX with each page broken up into several regions and components. Review the Oracle APEX component templates available in the Universal Theme.
Triggers
In Oracle Forms, triggers control almost everything. In Oracle APEX, control is based on flexible conditions that are activated when a page is submitted and are managed by validations, computations, dynamic actions, and processes.
Alerts
Most messages in Oracle APEX are generated when you submit a page.
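To illustrate the Business Logic recommendation above: a Forms item-validation trigger can be moved into a database procedure that an APEX validation or process then calls. A minimal, made-up sketch (the procedure name and rule are hypothetical):
-- Hypothetical replacement for a Forms WHEN-VALIDATE-ITEM trigger
CREATE OR REPLACE PROCEDURE validate_salary (p_salary IN NUMBER) AS
BEGIN
  IF p_salary IS NULL OR p_salary <= 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Salary must be a positive number.');
  END IF;
END validate_salary;
/
-- An APEX page process or validation can then simply call:
--   validate_salary(:P1_SALARY);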
Attached Libraries
Oracle APEX takes care of the JavaScript and CSS libraries that support the Universal Theme, which supports all of the components that you need for flexible, dynamic applications. You can include your own JavaScript and CSS in several ways, mostly through page attributes. You can add inline code or reference files that exist either in the database as a BLOB (#APP_IMAGES#) or sit on the middle tier, typically served by Oracle REST Data Services (ORDS). When a reference file is on an Oracle WebLogic Server, the file location is prefixed with #IMAGE_PREFIX#.
Editors
Oracle APEX has a text area and a rich text editor, which is equivalent to editors in Oracle Forms.
List of Values (LOV)
In APEX, the LOV is coupled with the item type. A radio group works well with a small handful of values; select lists suit middle-sized sets; a popup LOV suits large data sets. You can use the queries from Record Groups in Oracle Forms for the LOV queries in Oracle APEX. LOVs in Oracle APEX can be dynamically driven by a SQL query or be statically defined. A static definition allows a variety of conditions to be applied to each entry. These LOVs can then be associated with items such as radio groups and select lists, or with a column in a report, to translate a code to a label.
Parameters
Page items in Oracle APEX are populated between pages to pass information to the next page, such as the selected record in a report. Larger forms with a number of items are generally submitted as a whole, where the page process handles the data and branches to the next page. These values can be protected from URL tampering by session state security, at item, page, and application levels, often by default.
Popup Menus
Popup menus are not available out of the box in Oracle APEX, but you can build them by using lists and associating a button with the menu.
Program Units
Migrate the stored procedures and functions defined in program units in Oracle Forms into database stored procedures and functions, and use them in Oracle APEX processes, validations, and computations.
Property Classes
Property classes in Oracle Forms allow the developer to utilize common attributes among each instance of a component. In APEX you can define User Interface Defaults in the data dictionary, so that each time items or reports are created for specific tables or columns, the same features are applied by default. As for the style of the application, you can apply classes to components that carry a particular look and feel. The Universal Theme has a default skin that you can reconfigure declaratively.
Record Groups
Use queries in Record Groups to define the dynamic LOVs in Oracle APEX.
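For example, a dynamic LOV in APEX is just a two-column query returning a display value and a return value; a sketch against the classic EMP demo table (an assumption; substitute your own table):
-- Dynamic LOV definition: display value first, return value second
SELECT ename AS display_value,
       empno AS return_value
FROM   emp
ORDER  BY ename;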
Reports
Interactive reports in Oracle APEX come with a number of runtime manipulation options that give users the power to customize and manipulate the reports. Classic reports are simple reports that do not provide runtime manipulation options, but are based on SQL.
Menus
Oracle Forms has specific menu files, controlled by database roles. Updating the .mmx file required that there be no active users. The menu in Oracle APEX can be either across the top or down the left side. These menus can be statically defined or dynamically driven. Static navigation entries can be controlled by authorization schemes or custom conditions. Dynamic menus can have security tables integrated within the SQL.
Properties
The Page Designer introduced in Oracle APEX is similar to Oracle Forms, particularly with regard to the ability to edit multiple components at once, changing only intersecting attributes.
Apex Manage Logs and Files and Recovery
Page View Activity Logs track user activity for an application. The Application Express engine uses two logs to track user activity. At any given time, one log is designated as current. For each rendered page view, the Application Express engine inserts one row into the log file. A log switch occurs at the interval listed on the Page View Activity Logs page. At that point, the Application Express engine removes all entries in the noncurrent log and designates it as current.
SQL Workshop Logs
Delete SQL Workshop log entries. The SQL Workshop maintains a history of SQL statements run in SQL Commands.
Workspace Activity Reports Logs
Workspace administrators are users who perform administrator tasks specific to a workspace and have access to various types of activity reports.
Instance Activity Reports
Instance administrators are superusers that manage an entire hosted instance using the Application Express Administration Services application.
RMAN Backup/Restore
Suppose you lost the APEX tablespace but your database is currently functioning. If this is the case, and assuming your APEX tablespace does not span multiple datafiles, you can attempt to swap out the datafile. Force a backup in RMAN before trying any of this. There are a few different options here. All you really need are the following:
 Datafile
 Control file
 Archive/redo logs (if you want to move forward or backward in time)
Then run rman target / from a bash terminal. In RMAN, run the following. (The control file restore is only needed if the control file was lost as well; the target path for SET NEWNAME was not given in the original and must be supplied.)
RESTORE CONTROLFILE FROM '/tmp/oradata/your_ctrl_file_dir';
RUN {
  SQL 'ALTER TABLESPACE apex OFFLINE IMMEDIATE';
  SET NEWNAME FOR DATAFILE '/tmp/oradata/apex01.dbf' TO '<new datafile path>';
  RESTORE TABLESPACE apex;
  SWITCH DATAFILE ALL;
  RECOVER TABLESPACE apex;
}
Swap out Datafile
First find the location of your datafiles. You can find them by running the following in sqlplus / as sysdba (or whatever client you use):
spool '/tmp/spool.out'
select value from v$parameter where name = 'db_create_file_dest';
select tablespace_name from dba_data_files;
View the spool.out file and verify the location of your datafiles. See if the datafile is still associated with that tablespace. If the tablespace is still there, run:
select file_name, status from dba_data_files where tablespace_name = '<name>';
You want your datafile to be available. Then you want to set the tablespace to read only and take it offline:
alter tablespace <name> read only;
alter tablespace <name> offline;
Now copy your .dbf file to the directory returned from querying the db_create_file_dest value. Don't overwrite the old one. Then run:
alter tablespace <name> rename datafile '/u03/waterver/oradata/yourold.dbf' to '/u03/waterver/oradata/yournew.dbf';
This updates your control file to point to the new datafile. You can then bring your tablespace back online and into read/write mode. You may also want to verify the status of the tablespace, the name of the datafile associated with that tablespace, and so on.
APEX version requirements
The APEX option uses storage on the DB instance class for your DB instance. Following are the supported versions and approximate storage requirements for Oracle APEX.
APEX version 21.1.v1 – storage 125 MiB – supported on all Oracle Database versions. This version includes patch 32598392: PSE BUNDLE FOR APEX 21.1, PATCH_VERSION 3.
APEX version 20.2.v1 – storage 148 MiB – all versions except 21c. This version includes patch 32006852: PSE BUNDLE FOR APEX 20.2, PATCH_VERSION 2020.11.12. You can see the patch number and date by running the following query: SELECT PATCH_VERSION, PATCH_NUMBER FROM APEX_PATCHES;
APEX version 20.1.v1 – storage 173 MiB – all versions except 21c. This version includes patch 30990551: PSE BUNDLE FOR APEX 20.1, PATCH_VERSION 2020.07.15.
APEX version 19.2.v1 – storage 149 MiB – all versions except 21c.
APEX version 19.1.v1 – storage 148 MiB – all versions except 21c.
APEX version 18.2.v1 – storage 146 MiB – 12.1 and 12.2 only.
Oracle APEX authentication and authorization
In addition to authentication and authorization, Oracle provides an additional feature called Oracle VPD. VPD stands for "Virtual Private Database" and offers the possibility to implement multi-client capability in APEX web applications. With Oracle VPD and PL/SQL, special columns of tables can be declared as conditions to separate data between different clients. An active Oracle VPD automatically adds a WHERE clause to an SQL SELECT statement. This WHERE clause contains the declared columns and thus delivers only data sets that match (row-level security).
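As a rough sketch of how such a VPD policy is wired up with PL/SQL (all names here are made-up assumptions: the APP_CTX context, the CLIENT_ID attribute, and the DEMO.ORDERS table):
-- Policy function: returns the predicate appended to queries on the protected table
CREATE OR REPLACE FUNCTION client_policy (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2 ) RETURN VARCHAR2 AS
BEGIN
  RETURN 'client_id = SYS_CONTEXT(''APP_CTX'', ''CLIENT_ID'')';
END client_policy;
/
-- Register the policy so every SELECT on DEMO.ORDERS is filtered per client
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'DEMO',
    object_name     => 'ORDERS',
    policy_name     => 'orders_by_client',
    function_schema => 'DEMO',
    policy_function => 'client_policy',
    statement_types => 'SELECT' );
END;
/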
Authentication schemes supported in Oracle APEX:
 Application Express Accounts – user accounts that are created within and managed in the Oracle Application Express user repository. When you use this method, your application is authenticated against these accounts.
 Custom Authentication – create a custom authentication scheme from scratch to have complete control over your authentication interface.
 Database Accounts – database account credentials authentication utilizes database schema accounts to authenticate users.
 HTTP Header Variable – authenticate users externally by storing the username in an HTTP header variable set by the web server.
 Open Door Credentials – enable anyone to access your application using a built-in login page that captures a user name.
 No Authentication (using DAD) – adopts the current database user. This approach can be used in combination with a mod_plsql Database Access Descriptor (DAD) configuration that uses basic authentication to set the database session user.
 LDAP Directory – authenticate a user and password with an authentication request to an LDAP server.
 Oracle Application Server Single Sign-On Server – delegates authentication to the Oracle AS Single Sign-On (SSO) Server. To use this authentication scheme, your site must have been registered as a partner application with the SSO server.
 SAML Sign-In – delegates authentication to the Security Assertion Markup Language (SAML) Sign-In authentication scheme.
 Social Sign-In – supports authentication with Google, Facebook, and other social networks that support the OpenID Connect or OAuth2 standards.
Authorization Scheme Types
When you create an authorization scheme, you select an authorization scheme type. The authorization scheme type determines how an authorization scheme is applied. Developers can create new authorization type plug-ins to extend this list.
Exists SQL Query – enter a query that causes the authorization scheme to pass if it returns at least one row and to fail if it returns no rows.
NOT Exists SQL Query – enter a query that causes the authorization scheme to pass if it returns no rows and to fail if it returns one or more rows.
PL/SQL Function Returning Boolean – enter a function body. If the function returns true, the authorization succeeds.
Item in Expression 1 is NULL – enter an item name. If the item is null, the authorization succeeds.
Item in Expression 1 is NOT NULL – enter an item name. If the item is not null, the authorization succeeds.
Value of Item in Expression 1 Equals Expression 2 – enter an item name and a value. The authorization succeeds if the item's value equals the authorization value.
Value of Item in Expression 1 Does NOT Equal Expression 2 – enter an item name and a value. The authorization succeeds if the item's value is not equal to the authorization value.
Value of Preference in Expression 1 Does NOT Equal Expression 2 – enter a preference name and a value. The authorization succeeds if the preference's value is not equal to the authorization value.
Value of Preference in Expression 1 Equals Expression 2 – enter a preference name and a value. The authorization succeeds if the preference's value equals the authorization value.
Is In Group – enter a group name. The authorization succeeds if the group is enabled as a dynamic group for the session. If the application uses Application Express Accounts authentication, this check also includes workspace groups that are granted to the user. If the application uses Database authentication, this check also includes database roles that are granted to the user.
Is Not In Group – enter a group name. The authorization succeeds if the group is not enabled as a dynamic group for the session.
Upgrades of APEX Software
The basic steps for upgrading APEX are:
Run the APEX installation script against the target database. The same script is used for new installations and upgrades. The script automatically senses whether there is a version of APEX present and automatically takes the appropriate action.
Update the existing version of the /i/ virtual directory with the images, JavaScript, CSS, etc. from the current version's APEX installation medium. For standard HTTP Server installations, this is just a simple copy command. For the Embedded PL/SQL Gateway (EPG), the script apxldimg.sql is used to load the images into the database. For the APEX Listener / Oracle REST Data Services (ORDS), recreate the i.jar file that contains the references to the images, JavaScript, CSS, etc. from the APEX installation media, OR copy the new versions of the files to the existing location referenced by the current APEX Listener / ORDS / web server.
Check Your Version
Prior to the Application Express (APEX) upgrade, begin by identifying the version of APEX currently installed and the database prerequisites. Here, <SCHEMA> represents the schema of the current version of APEX and is one of the following:
 For APEX (HTML DB) versions 1.5 - 3.1, the schema name is FLOWS_XXXXXX. For example: FLOWS_010500 for HTML DB version 1.5.x.
 For APEX (HTML DB) versions 3.2.x and above, the schema name is APEX_XXXXXX. For example: APEX_210100 for APEX version 21.1.
To check, run the query shown below in SQL*Plus as SYS or SYSTEM. If the query returns 0, it is a runtime-only installation, and apxrtins.sql should be used for the upgrade. If the query returns 1, this is a development install, and apexins.sql should be used.
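The query itself is missing from the original page; based on the surrounding description (it runs against <SCHEMA> and returns 0 for a runtime-only install, 1 for a development install), the commonly used check takes the following form. This is an assumption; replace <SCHEMA> with your APEX schema.
-- Returns 1 for a development installation, 0 for a runtime-only installation
SELECT COUNT(*) FROM <SCHEMA>.WWV_FLOWS WHERE id = 4000;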
The full download is needed if the first two digits of the APEX version are different. For example, the full Application Express download is needed to go from 20.0 to 21.1. See <Note 752705.1> "ORA-1435: User Does not Exist" When Upgrading APEX Using apxpatch.sql for more information. The patch is needed if only the third digit of the version changes, for example when upgrading from 21.1.0 to 21.1.2.
END
CHAPTER 19 ORACLE WEBLOGIC SERVERS AND ITS CONFIGURATIONS
What is Oracle WebLogic Server?
Oracle WebLogic Server is a leading e-commerce online transaction processing (OLTP) platform, developed to connect users in distributed computing production environments and to facilitate the integration of mainframe applications with distributed corporate data and applications.
History of WebLogic
WebLogic Server was the first J2EE application server.
 1995: WebLogic, Inc. founded.
 1997: First release of WebLogic Tengah.
 1998: WebLogic, Inc., acquired by BEA Systems.
 2008: BEA Systems acquired by Oracle.
 2020: WebLogic Server version 14 released.
WebLogic is an application server that runs on a middle tier, between back-end databases and related applications on one side and browser-based thin clients on the other. WebLogic Server mediates the exchange of requests from the client tier with responses from the back-end tier. WebLogic Server is based on Java Platform, Enterprise Edition (Java EE), formerly known as Java 2 Platform, Enterprise Edition or J2EE, the standard platform used to create Java-based multi-tier enterprise applications.
Oracle WebLogic Server vs. Apache Tomcat
The Apache Tomcat web server is often compared with WebLogic Server. Tomcat serves static content and dynamic content from web applications built with Java servlets and JavaServer Pages, but it does not implement the full Java EE platform that WebLogic Server provides.
Programming Models
WebLogic Server provides complete support for Java EE 6.0.
Web Applications provide the basic Java EE mechanism for deployment of dynamic web pages based on the Java EE standards of servlets and JavaServer Pages (JSP). Web applications are also used to serve static web content such as HTML pages and image files. (A minimal servlet is sketched at the end of this overview.)
Web Services provide a shared set of functions that are available to other systems on a network and can be used as a component of distributed web-based applications.
XML capabilities include data exchange and a means to store content independently of its presentation.
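To make the web application model concrete, here is a minimal sketch of the kind of servlet such a module deploys; the class name and output are illustrative, and the servlet would be registered through the web.xml deployment descriptor discussed later in this chapter.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Generate a dynamic page in response to a web browser request.
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>Hello from WebLogic at "
                + new java.util.Date() + "</body></html>");
    }
}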
Java Message Service (JMS) enables applications to communicate with one another through the exchange of messages. A message is a request, report, or event that contains the information needed to coordinate communication between different applications. (A minimal send example follows this overview.)
Java Database Connectivity (JDBC) provides pooled access to DBMS resources.
Resource Adapters provide connectivity to Enterprise Information Systems (EISes).
Enterprise JavaBeans (EJB) provide Java objects to encapsulate data and business logic.
Remote Method Invocation (RMI) is the Java standard for distributed object computing, allowing applications to invoke methods on a remote object as if it were local.
Security APIs allow you to integrate authentication and authorization into your Java EE applications. You can also use the Security Provider APIs to create your own custom security providers.
WebLogic Tuxedo Connectivity (WTC) provides interoperability between WebLogic Server applications and Tuxedo services. WTC allows WebLogic Server clients to invoke Tuxedo services and Tuxedo clients to invoke EJBs in response to a service request.
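As a hedged illustration of JMS messaging from a WebLogic client, the following sketch sends one text message to a queue. The t3 URL and the destination's JNDI name jms/demoQueue are assumptions for this example; weblogic.jndi.WLInitialContextFactory and the default weblogic.jms.ConnectionFactory are the names WebLogic documentation commonly uses.

import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class JmsSendSketch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001"); // assumed server address
        Context ctx = new InitialContext(env);

        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("weblogic.jms.ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/demoQueue"); // hypothetical destination name

        Connection con = cf.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage msg = session.createTextMessage("order #42 received");
            producer.send(msg); // requires the "send" policy on a secured destination
        } finally {
            con.close(); // releases the JMS connection
        }
    }
}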
Oracle's service-oriented architecture (SOA)
Oracle Fusion Applications Architecture
Oracle offers three distinct products as part of the Oracle WebLogic Server 11g application family:
 Oracle WebLogic Server Standard Edition (SE)
 Oracle WebLogic Server Enterprise Edition (EE)
 Oracle WebLogic Suite
Oracle WebLogic 11g Server Standard Edition
The WebLogic Server Standard Edition (SE) is a full-featured server, but is mainly intended for developers to develop enterprise applications quickly. WebLogic Server SE implements all the Java EE standards and offers management capabilities through the Administration Console.
Oracle WebLogic 11g Server Enterprise Edition
Oracle WebLogic Server EE is designed for mission-critical applications that require high availability and advanced diagnostic capabilities. The EE version contains all the features of the SE version and, in addition, supports clustering of servers for high availability, the ability to manage multiple domains, and various diagnostic tools.
Oracle WebLogic Suite 11g
Oracle WebLogic Suite offers support for dynamic scale-out applications with features such as in-memory data grid technology and comprehensive management capabilities. It consists of the following components:
 Oracle WebLogic Server EE
 Oracle Coherence (provides in-memory caching)
 Oracle TopLink (provides persistence functionality)
 Oracle JRockit (for low-latency, high-throughput transactions)
 Enterprise Manager (administration and operations)
 Development tools (JDeveloper/Eclipse)
How does WebLogic Server operate? The big picture.
J2EE Platform
WebLogic Server implements Java 2 Platform, Enterprise Edition (J2EE) version 1.3 technologies. J2EE is the standard platform for developing multi-tier enterprise applications based on the Java programming language. The technologies that make up J2EE were developed collaboratively by Sun Microsystems and other software vendors, including BEA Systems. WebLogic Server J2EE applications are based on standardized, modular components. WebLogic Server provides a complete set of services for those components and handles many details of application behavior automatically, without requiring programming.
Note: Because J2EE is backward compatible, you can still run J2EE 1.3 applications on WebLogic Server versions 7.x and later.
What Are WebLogic Server J2EE Applications and Modules?
A BEA WebLogic Server J2EE application consists of one of the following modules or applications running on WebLogic Server:
Web application modules: HTML pages, servlets, JavaServer Pages, and related files. See Web Application Modules.
Enterprise JavaBeans (EJB) modules: entity beans, session beans, and message-driven beans. See Enterprise JavaBean Modules.
Connector modules: resource adapters. See Connector Modules.
Enterprise applications: web application modules, EJB modules, and resource adapters packaged into an application. See Enterprise Applications.
Web Application Modules
A web application on WebLogic Server includes the following files:
At least one servlet or JSP, along with any helper classes.
A web.xml deployment descriptor, a J2EE standard XML document that describes the contents of a WAR file.
Optionally, a weblogic.xml deployment descriptor, an XML document containing WebLogic Server-specific elements for web applications.
A web application can also include HTML and XML pages with supporting files such as images and multimedia files.
Servlets
Servlets are Java classes that execute in WebLogic Server, accept a request from a client, process it, and optionally return a response to the client. An HttpServlet is most often used to generate dynamic web pages in response to web browser requests.
JavaServer Pages
JavaServer Pages (JSPs) are web pages coded with an extended HTML that makes it possible to embed Java code in a web page. JSPs can call custom Java classes, known as tag libraries, using HTML-like tags. The appc compiler compiles JSPs and translates them into servlets. WebLogic Server automatically compiles JSPs if the servlet class file is not present or is older than the JSP source file. See Using Ant Tasks to Create Compile Scripts. You can also precompile JSPs and package the servlet class in a web archive (WAR) file to avoid compiling in the server. Servlets and JSPs may require additional helper classes that must also be deployed with the web application.
Overview of WebLogic Resource Types
WebLogic resources are hierarchical. Therefore, the level at which you define security roles and security policies is up to you. For example, you can define security roles and security policies for an entire Enterprise Application (EAR),
an Enterprise JavaBean (EJB) JAR containing multiple EJBs, a particular EJB within that JAR, or a single method within that EJB.
Administrative Resources
An Administrative resource is a type of WebLogic resource that allows users to perform administrative tasks. Examples of Administrative resources include the WebLogic Server Administration Console, the weblogic.Admin tool, and MBean APIs. Administrative resources are limited in scope. Currently, you can only secure the User Lockout operation on an Administrative resource using the WebLogic Server Administration Console. This operation provides compatibility with WebLogic Server 6.x, and allows users who meet the security requirements to unlock users who have been locked out of their accounts. For more information about user lockout, see Protecting User Accounts in Managing WebLogic Security.
Application Resources
An Application resource is a type of WebLogic resource that represents an Enterprise Application, packaged as an EAR (Enterprise Application aRchive) file. Unlike the other types of WebLogic resources, the hierarchy of an Application resource is a mechanism for containment rather than a type hierarchy. You secure an Application resource when you want to protect multiple WebLogic resources that constitute the Enterprise Application (for example, EJB resources, URL resources, and Web Service resources). In other words, securing an Enterprise Application will cause all the WebLogic resources within that application to inherit its security configuration. You can also secure, on an individual basis, the WebLogic resources that constitute an Enterprise Application (EAR). Securing a resource by both means causes the individual security configuration to override the security configuration inherited from the Enterprise Application for that WebLogic resource.
Enterprise Information Systems (EIS) Resources
A J2EE Connector is a system-level software driver used by an application server such as WebLogic Server to connect to an Enterprise Information System (EIS). BEA supports Connectors developed by EIS vendors and third-party application developers that can be deployed in any application server supporting the Sun Microsystems J2EE Platform Specification, Version 1.3. Connectors, also known as Resource Adapters, contain the Java components and, if necessary, the native components required to interact with the EIS. An Enterprise Information System (EIS) resource is a specific type of WebLogic resource that is designed as a Connector. To secure access to an EIS, you create security policies and security roles for all Connectors as a group, or for individual Connectors. Information about securing EIS resources can be found both in this document and in the Security section of Programming WebLogic J2EE Connectors. Instructions for creating the credential maps for use with EIS resources are available in the Single Sign-On with Enterprise Information Systems section of Managing WebLogic Security.
COM Resources
WebLogic jCOM is a software bridge that allows bidirectional access between Java/J2EE objects deployed in WebLogic Server and Microsoft ActiveX components available within the Microsoft Office family of products, Visual Basic and C++ objects, and other Component Object Model/Distributed Component Object Model (COM/DCOM) environments. A COM resource is a specific type of WebLogic resource that is designed as a program component object according to Microsoft's framework.
To secure COM components accessed through BEA's bi-directional COM-Java (jCOM) bridging tool, you create security policies and security roles for packages containing multiple COM classes, or for individual COM classes. Information about securing COM resources can be found both in this document and in the Configuring Access Control section of Programming WebLogic jCOM.
Java Database Connectivity (JDBC) Resources
A Java Database Connectivity (JDBC) resource is a specific type of WebLogic resource that is related to JDBC. To secure JDBC database access, you can create security policies and security roles for all connection pools as a group, for individual connection pools, and for MultiPools. When you secure individual connection pools, you can choose whether to protect all operations on the connection pool, or protect one of the following operations:
admin: The following methods on the JDBCConnectionPoolRuntimeMBean are invoked as admin operations: clearStatementCache, destroy, disableDroppingUsers, disableFreezingUsers, enable, forceDestroy, forceShutdown, forceSuspend, getProperties, poolExists, resume, shutdown, shutdownHard, shutdownSoft, and suspend.
reserve: Applications reserve a connection in the connection pool by looking up the data source that points to the connection pool and then calling getConnection (a minimal sketch follows below). Note: Giving a user the reserve permission enables them to execute vendor-specific operations on the connection. Depending on the database vendor, some of these operations may have database security implications.
shrink: Shrinks the connection pool to the greater of the currently reserved connections or the initial size.
reset: Resets the database connection pool by shutting down and re-establishing all physical database connections. This also clears the statement cache for each connection in the connection pool. You can only reset a normally running connection pool.
Note: If a security policy controls access to a connection pool that is in a MultiPool, access checks are performed at both levels of the JDBC resource hierarchy (once at the MultiPool level, and again at the individual connection pool level). As with all types of WebLogic resources, this double-checking ensures that the most restrictive security policy controls access.
Note: If you are an Oracle user, you can also control access to JDBC resources using an Oracle Virtual Private Database (VPD). For more information, see Programming with Oracle Virtual Private Databases in Using Third-Party Drivers with WebLogic Server.
ODBC and JDBC details
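To illustrate the reserve operation just described, here is a minimal server-side sketch that looks up a data source in JNDI and reserves a pooled connection with getConnection; the JNDI name jdbc/demoDS and the emp table are hypothetical.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class ReserveConnectionSketch {
    public static void runQuery() throws Exception {
        // Server-side code can use the default initial context.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/demoDS"); // hypothetical JNDI name

        // getConnection reserves a connection from the pool; closing it
        // returns the connection to the pool rather than closing the socket.
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM emp")) { // hypothetical table
            if (rs.next()) {
                System.out.println("rows: " + rs.getInt(1));
            }
        }
    }
}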
Java Message Service (JMS) Resources
A Java Message Service (JMS) resource is a specific type of WebLogic resource that is related to JMS. To secure JMS destinations, you create security policies and security roles for all destinations (JMS queues and JMS topics) as a group, or for an individual destination (JMS queue or JMS topic) on a JMS server. When you secure a particular destination on a JMS server, you can protect all operations on the destination, or protect one of the following operations:
send: Required to send a message to a queue or a topic. This includes calls to the MessageProducer.send(), QueueSender.send(), and TopicPublisher.publish() methods, as well as the Messaging Bridge.
receive: Required to create a consumer on a queue or a topic. This includes calls to the Session.createConsumer(), Session.createDurableSubscriber(), QueueSession.createReceiver(), TopicSession.createSubscriber(), TopicSession.createDurableSubscriber(), Connection.createConnectionConsumer(), Connection.createDurableConnectionConsumer(), QueueConnection.createConnectionConsumer(), TopicConnection.createConnectionConsumer(), and TopicConnection.createDurableConnectionConsumer() methods, as well as the Messaging Bridge and message-driven beans.
browse: Required to view the messages on a queue using the QueueBrowser interface.
Java Naming and Directory Interface (JNDI) Resources
JNDI provides a common-denominator interface to many existing naming services, such as Lightweight Directory Access Protocol (LDAP) and Domain Name System (DNS). These naming services maintain a set of bindings, which relate names to objects and provide the ability to look up objects by name. JNDI allows the components in distributed applications to locate each other.
===========================END=========================